00:00:00.001 Started by upstream project "autotest-per-patch" build number 121335
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.026 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.026 The recommended git tool is: git
00:00:00.027 using credential 00000000-0000-0000-0000-000000000002
00:00:00.030 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/dsa-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.050 Fetching changes from the remote Git repository
00:00:00.051 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.092 Using shallow fetch with depth 1
00:00:00.092 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.092 > git --version # timeout=10
00:00:00.149 > git --version # 'git version 2.39.2'
00:00:00.149 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.150 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.150 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.199 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.211 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.224 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD)
00:00:03.224 > git config core.sparsecheckout # timeout=10
00:00:03.237 > git read-tree -mu HEAD # timeout=10
00:00:03.253 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5
00:00:03.279 Commit message: "ansible/roles/custom_facts: Drop nvme features"
00:00:03.279 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10
00:00:03.415 [Pipeline] Start of Pipeline
00:00:03.429 [Pipeline] library
00:00:03.430 Loading library shm_lib@master
00:00:03.431 Library shm_lib@master is cached. Copying from home.
00:00:03.450 [Pipeline] node
00:00:03.474 Running on FCP10 in /var/jenkins/workspace/dsa-phy-autotest
00:00:03.476 [Pipeline] {
00:00:03.488 [Pipeline] catchError
00:00:03.490 [Pipeline] {
00:00:03.504 [Pipeline] wrap
00:00:03.515 [Pipeline] {
00:00:03.520 [Pipeline] stage
00:00:03.521 [Pipeline] { (Prologue)
00:00:03.912 [Pipeline] sh
00:00:04.198 + logger -p user.info -t JENKINS-CI
00:00:04.217 [Pipeline] echo
00:00:04.219 Node: FCP10
00:00:04.224 [Pipeline] sh
00:00:04.522 [Pipeline] setCustomBuildProperty
00:00:04.534 [Pipeline] echo
00:00:04.535 Cleanup processes
00:00:04.538 [Pipeline] sh
00:00:04.823 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:04.823 2421714 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:04.836 [Pipeline] sh
00:00:05.119 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:05.119 ++ grep -v 'sudo pgrep'
00:00:05.119 ++ awk '{print $1}'
00:00:05.119 + sudo kill -9
00:00:05.119 + true
00:00:05.134 [Pipeline] cleanWs
00:00:05.143 [WS-CLEANUP] Deleting project workspace...
00:00:05.143 [WS-CLEANUP] Deferred wipeout is used...
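
[Editor's note] The cleanup step above chains pgrep, grep and awk to find stale SPDK processes from a previous run and kill them; `grep -v 'sudo pgrep'` drops the pgrep invocation itself from the candidate list, and the trailing `+ true` is why the stage does not fail even though `kill -9` received no PIDs here. A minimal sketch of the same idiom, assuming only the paths seen in this log (the `kill_stale` helper name is hypothetical):

  #!/usr/bin/env bash
  set -e
  # Kill leftover processes whose command line mentions the workspace.
  kill_stale() {
    local pattern=$1 pids
    # pgrep -af prints "PID CMDLINE"; drop our own pgrep, keep the PIDs.
    pids=$(sudo pgrep -af "$pattern" | grep -v 'sudo pgrep' | awk '{print $1}')
    # The list may be empty or racy; never let that fail the stage.
    sudo kill -9 $pids || true
  }
  kill_stale /var/jenkins/workspace/dsa-phy-autotest/spdk
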
00:00:05.150 [WS-CLEANUP] done
00:00:05.153 [Pipeline] setCustomBuildProperty
00:00:05.163 [Pipeline] sh
00:00:05.444 + sudo git config --global --replace-all safe.directory '*'
00:00:05.520 [Pipeline] nodesByLabel
00:00:05.521 Found a total of 1 nodes with the 'sorcerer' label
00:00:05.529 [Pipeline] httpRequest
00:00:05.534 HttpMethod: GET
00:00:05.534 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:05.538 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:05.541 Response Code: HTTP/1.1 200 OK
00:00:05.542 Success: Status code 200 is in the accepted range: 200,404
00:00:05.542 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:05.811 [Pipeline] sh
00:00:06.103 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz
00:00:06.119 [Pipeline] httpRequest
00:00:06.123 HttpMethod: GET
00:00:06.124 URL: http://10.211.164.96/packages/spdk_d4fbb5733e2eaefcd7ce9a66f1ea6db59726d6f2.tar.gz
00:00:06.126 Sending request to url: http://10.211.164.96/packages/spdk_d4fbb5733e2eaefcd7ce9a66f1ea6db59726d6f2.tar.gz
00:00:06.129 Response Code: HTTP/1.1 200 OK
00:00:06.130 Success: Status code 200 is in the accepted range: 200,404
00:00:06.130 Saving response body to /var/jenkins/workspace/dsa-phy-autotest/spdk_d4fbb5733e2eaefcd7ce9a66f1ea6db59726d6f2.tar.gz
00:00:21.080 [Pipeline] sh
00:00:21.362 + tar --no-same-owner -xf spdk_d4fbb5733e2eaefcd7ce9a66f1ea6db59726d6f2.tar.gz
00:00:23.918 [Pipeline] sh
00:00:24.206 + git -C spdk log --oneline -n5
00:00:24.206 d4fbb5733 trace: add trace_flags_fini()
00:00:24.206 8571999d8 test/scheduler: Stop moving all processes between cgroups
00:00:24.206 06472fb6d lib/idxd: fix batch size in kernel IDXD
00:00:24.206 44dcf4fb9 pkgdep/idxd: Add dependency for accel-config used in kernel IDXD
00:00:24.206 3dbaa93c1 nvmf: pass command dword 12 and 13 for write
00:00:24.220 [Pipeline] }
00:00:24.237 [Pipeline] // stage
00:00:24.246 [Pipeline] stage
00:00:24.249 [Pipeline] { (Prepare)
00:00:24.268 [Pipeline] writeFile
00:00:24.285 [Pipeline] sh
00:00:24.591 + logger -p user.info -t JENKINS-CI
00:00:24.607 [Pipeline] sh
00:00:24.895 + logger -p user.info -t JENKINS-CI
00:00:24.908 [Pipeline] sh
00:00:25.195 + cat autorun-spdk.conf
00:00:25.195 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.195 SPDK_TEST_ACCEL_DSA=1
00:00:25.195 SPDK_TEST_ACCEL_IAA=1
00:00:25.195 SPDK_TEST_NVMF=1
00:00:25.195 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:25.195 SPDK_RUN_ASAN=1
00:00:25.195 SPDK_RUN_UBSAN=1
00:00:25.203 RUN_NIGHTLY=0
00:00:25.208 [Pipeline] readFile
00:00:25.234 [Pipeline] withEnv
00:00:25.236 [Pipeline] {
00:00:25.251 [Pipeline] sh
00:00:25.540 + set -ex
00:00:25.541 + [[ -f /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf ]]
00:00:25.541 + source /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf
00:00:25.541 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:25.541 ++ SPDK_TEST_ACCEL_DSA=1
00:00:25.541 ++ SPDK_TEST_ACCEL_IAA=1
00:00:25.541 ++ SPDK_TEST_NVMF=1
00:00:25.541 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:25.541 ++ SPDK_RUN_ASAN=1
00:00:25.541 ++ SPDK_RUN_UBSAN=1
00:00:25.541 ++ RUN_NIGHTLY=0
00:00:25.541 + case $SPDK_TEST_NVMF_NICS in
00:00:25.541 + DRIVERS=
00:00:25.541 + [[ -n '' ]]
00:00:25.541 + exit 0
00:00:25.551 [Pipeline] }
00:00:25.569 [Pipeline] // withEnv
00:00:25.575 [Pipeline] }
00:00:25.591 [Pipeline] // stage
00:00:25.600 [Pipeline] catchError
00:00:25.602 [Pipeline] {
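
[Editor's note] Both checkouts above are served from a local package cache rather than cloned: the pipeline asks http://10.211.164.96 for a tarball named after the exact commit, treats 200 and 404 as acceptable status codes (a 404 would be a cache miss), and unpacks with --no-same-owner so files belong to the Jenkins user rather than the UIDs recorded in the archive. A hedged equivalent outside Jenkins, using curl in place of the httpRequest step; the git-clone fallback is an assumption, not shown in this log:

  # Fetch a pre-packaged checkout of the pinned commit from the cache;
  # fall back to a plain clone on a cache miss (hypothetical fallback).
  commit=f964f6d3463483adf05cc5c086f2abd292e05f1d
  url=http://10.211.164.96/packages/jbp_${commit}.tar.gz
  if curl -fsSO "$url"; then
    tar --no-same-owner -xf "jbp_${commit}.tar.gz"
  else
    git clone https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool jbp
  fi
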
00:00:25.617 [Pipeline] timeout
00:00:25.617 Timeout set to expire in 50 min
00:00:25.619 [Pipeline] {
00:00:25.636 [Pipeline] stage
00:00:25.638 [Pipeline] { (Tests)
00:00:25.654 [Pipeline] sh
00:00:25.941 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/dsa-phy-autotest
00:00:25.941 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest
00:00:25.941 + DIR_ROOT=/var/jenkins/workspace/dsa-phy-autotest
00:00:25.941 + [[ -n /var/jenkins/workspace/dsa-phy-autotest ]]
00:00:25.941 + DIR_SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:25.941 + DIR_OUTPUT=/var/jenkins/workspace/dsa-phy-autotest/output
00:00:25.941 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/spdk ]]
00:00:25.941 + [[ ! -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:00:25.941 + mkdir -p /var/jenkins/workspace/dsa-phy-autotest/output
00:00:25.941 + [[ -d /var/jenkins/workspace/dsa-phy-autotest/output ]]
00:00:25.941 + cd /var/jenkins/workspace/dsa-phy-autotest
00:00:25.941 + source /etc/os-release
00:00:25.941 ++ NAME='Fedora Linux'
00:00:25.941 ++ VERSION='38 (Cloud Edition)'
00:00:25.941 ++ ID=fedora
00:00:25.941 ++ VERSION_ID=38
00:00:25.941 ++ VERSION_CODENAME=
00:00:25.941 ++ PLATFORM_ID=platform:f38
00:00:25.941 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:00:25.941 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:25.941 ++ LOGO=fedora-logo-icon
00:00:25.941 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:00:25.941 ++ HOME_URL=https://fedoraproject.org/
00:00:25.941 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:00:25.941 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:25.941 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:25.941 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:25.941 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:00:25.941 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:25.941 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:00:25.941 ++ SUPPORT_END=2024-05-14
00:00:25.941 ++ VARIANT='Cloud Edition'
00:00:25.941 ++ VARIANT_ID=cloud
00:00:25.941 + uname -a
00:00:25.941 Linux spdk-fcp-10 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:00:25.941 + sudo /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status
00:00:28.486 Hugepages
00:00:28.486 node hugesize free / total
00:00:28.486 node0 1048576kB 0 / 0
00:00:28.486 node0 2048kB 0 / 0
00:00:28.486 node1 1048576kB 0 / 0
00:00:28.486 node1 2048kB 0 / 0
00:00:28.486
00:00:28.487 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:28.487 DSA 0000:6a:01.0 8086 0b25 0 idxd - -
00:00:28.487 IAA 0000:6a:02.0 8086 0cfe 0 idxd - -
00:00:28.487 DSA 0000:6f:01.0 8086 0b25 0 idxd - -
00:00:28.487 IAA 0000:6f:02.0 8086 0cfe 0 idxd - -
00:00:28.487 DSA 0000:74:01.0 8086 0b25 0 idxd - -
00:00:28.487 IAA 0000:74:02.0 8086 0cfe 0 idxd - -
00:00:28.487 DSA 0000:79:01.0 8086 0b25 0 idxd - -
00:00:28.487 IAA 0000:79:02.0 8086 0cfe 0 idxd - -
00:00:28.487 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:00:28.487 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme2 nvme2n1
00:00:28.487 NVMe 0000:cb:00.0 8086 0a54 1 nvme nvme1 nvme1n1
00:00:28.487 DSA 0000:e7:01.0 8086 0b25 1 idxd - -
00:00:28.487 IAA 0000:e7:02.0 8086 0cfe 1 idxd - -
00:00:28.487 DSA 0000:ec:01.0 8086 0b25 1 idxd - -
00:00:28.487 IAA 0000:ec:02.0 8086 0cfe 1 idxd - -
00:00:28.487 DSA 0000:f1:01.0 8086 0b25 1 idxd - -
00:00:28.487 IAA 0000:f1:02.0 8086 0cfe 1 idxd - -
00:00:28.487 DSA 0000:f6:01.0 8086 0b25 1 idxd - -
00:00:28.487 IAA 0000:f6:02.0 8086 0cfe 1 idxd - -
00:00:28.487 + rm -f /tmp/spdk-ld-path
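
[Editor's note] autoruner.sh sources /etc/os-release to pick up the distro variables dumped above, and the test scripts then branch on them (the `[[ Fedora Linux == FreeBSD ]]` check further down is this pattern at work). Since /etc/os-release is plain shell assignments, detection is just a source plus a string test; a minimal sketch of the idiom, not the script's exact code:

  # Distro detection as used by the test scripts: source /etc/os-release,
  # then let NAME/ID/VERSION_ID drive platform-specific branches.
  source /etc/os-release
  if [[ $NAME == FreeBSD ]]; then
    echo "FreeBSD host: $VERSION_ID"
  else
    echo "Linux host: $PRETTY_NAME (id=$ID)"
  fi
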
00:00:28.487 + source autorun-spdk.conf
00:00:28.487 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.487 ++ SPDK_TEST_ACCEL_DSA=1
00:00:28.487 ++ SPDK_TEST_ACCEL_IAA=1
00:00:28.487 ++ SPDK_TEST_NVMF=1
00:00:28.487 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.487 ++ SPDK_RUN_ASAN=1
00:00:28.487 ++ SPDK_RUN_UBSAN=1
00:00:28.487 ++ RUN_NIGHTLY=0
00:00:28.487 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:28.487 + [[ -n '' ]]
00:00:28.487 + sudo git config --global --add safe.directory /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:28.487 + for M in /var/spdk/build-*-manifest.txt
00:00:28.487 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:28.487 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/
00:00:28.487 + for M in /var/spdk/build-*-manifest.txt
00:00:28.487 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:28.487 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/dsa-phy-autotest/output/
00:00:28.487 ++ uname
00:00:28.487 + [[ Linux == \L\i\n\u\x ]]
00:00:28.487 + sudo dmesg -T
00:00:28.487 + sudo dmesg --clear
00:00:28.748 + dmesg_pid=2422799
00:00:28.748 + [[ Fedora Linux == FreeBSD ]]
00:00:28.748 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:28.748 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:28.748 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:28.748 + [[ -x /usr/src/fio-static/fio ]]
00:00:28.748 + export FIO_BIN=/usr/src/fio-static/fio
00:00:28.748 + FIO_BIN=/usr/src/fio-static/fio
00:00:28.748 + sudo dmesg -Tw
00:00:28.748 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\d\s\a\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:28.748 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:28.748 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:28.748 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:28.748 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:28.748 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:28.748 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:28.748 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:28.748 + spdk/autorun.sh /var/jenkins/workspace/dsa-phy-autotest/autorun-spdk.conf
00:00:28.748 Test configuration:
00:00:28.748 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:28.748 SPDK_TEST_ACCEL_DSA=1
00:00:28.748 SPDK_TEST_ACCEL_IAA=1
00:00:28.748 SPDK_TEST_NVMF=1
00:00:28.748 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:28.748 SPDK_RUN_ASAN=1
00:00:28.748 SPDK_RUN_UBSAN=1
00:00:28.748 RUN_NIGHTLY=0
00:34:21 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:34:21 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:34:21 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:34:21 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:34:21 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:21 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:21 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:21 -- paths/export.sh@5 -- $ export PATH
00:34:21 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:34:21 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output
00:34:21 -- common/autobuild_common.sh@435 -- $ date +%s
00:34:21 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714170861.XXXXXX
00:34:21 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714170861.zb5nvK
00:34:21 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:34:21 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:34:21 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/'
00:34:21 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp'
00:34:21 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:34:21 -- common/autobuild_common.sh@451 -- $ get_config_params
00:34:21 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:34:21 -- common/autotest_common.sh@10 -- $ set +x
00:34:21 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
00:34:21 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:34:21 -- pm/common@17 -- $ local monitor
00:34:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:21 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2422833
00:34:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:21 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2422835
00:34:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:21 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2422836
00:34:21 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:34:21 -- pm/common@21 -- $ date +%s
00:34:21 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2422838
00:34:21 -- pm/common@26 -- $ sleep 1
00:34:21 -- pm/common@21 -- $ date +%s
00:34:21 -- pm/common@21 -- $ date +%s
00:34:21 -- pm/common@21 -- $ date +%s
00:34:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714170861
00:34:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714170861
00:34:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714170861
00:34:21 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1714170861
00:00:28.749 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714170861_collect-vmstat.pm.log
00:00:28.749 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714170861_collect-bmc-pm.bmc.pm.log
00:00:28.749 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714170861_collect-cpu-temp.pm.log
00:00:28.749 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1714170861_collect-cpu-load.pm.log
00:00:29.690 00:34:22 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:00:29.690 00:34:22 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:29.690 00:34:22 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:29.690 00:34:22 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk
00:00:29.690 00:34:22 -- spdk/autobuild.sh@16 -- $ date -u
00:00:29.690 Fri Apr 26 10:34:22 PM UTC 2024
00:00:29.690 00:34:22 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:29.690 v24.05-pre-450-gd4fbb5733
00:00:29.690 00:34:22 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:00:29.690 00:34:22 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:00:29.690 00:34:22 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:29.690 00:34:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:29.690 00:34:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:29.950 ************************************
00:00:29.950 START TEST asan
00:00:29.950 ************************************
00:34:22 -- common/autotest_common.sh@1111 -- $ echo 'using asan'
00:00:29.950 using asan
00:00:29.950
00:00:29.950 real 0m0.000s
00:00:29.950 user 0m0.000s
00:00:29.950 sys 0m0.000s
00:00:29.950 00:34:22 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:29.950 00:34:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:29.950 ************************************
00:00:29.950 END TEST asan
00:00:29.950 ************************************
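
[Editor's note] The START TEST / END TEST banners and the zeroed real/user/sys figures above come from SPDK's run_test helper in common/autotest_common.sh, which times an arbitrary command between banner lines. A simplified sketch of that shape, under the assumption that banner width and bookkeeping differ in the real helper:

  # run_test <name> <command...>: banner, time the command, banner.
  run_test() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    time "$@"
    local rc=$?
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
  }
  run_test asan echo 'using asan'
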
00:34:22 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:29.950 00:34:22 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:34:22 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:34:22 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:34:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:29.950 ************************************
00:00:29.950 START TEST ubsan
00:00:29.950 ************************************
00:34:22 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan'
00:00:29.951 using ubsan
00:00:29.951
00:00:29.951 real 0m0.000s
00:00:29.951 user 0m0.000s
00:00:29.951 sys 0m0.000s
00:00:29.951 00:34:22 -- common/autotest_common.sh@1112 -- $ xtrace_disable
00:00:29.951 00:34:22 -- common/autotest_common.sh@10 -- $ set +x
00:00:29.951 ************************************
00:00:29.951 END TEST ubsan
00:00:29.951 ************************************
00:34:22 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:34:22 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:34:22 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:34:22 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:34:22 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:34:22 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:34:22 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:34:22 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:34:22 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/dsa-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-shared
00:00:29.951 Using default SPDK env in /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk
00:00:29.951 Using default DPDK in /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build
00:00:30.211 Using 'verbs' RDMA provider
00:00:43.374 Configuring ISA-L (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal.log)...done.
00:00:53.441 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/dsa-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:00:53.702 Creating mk/config.mk...done.
00:00:53.702 Creating mk/cc.flags.mk...done.
00:00:53.702 Type 'make' to build.
00:00:53.702 00:34:46 -- spdk/autobuild.sh@69 -- $ run_test make make -j128
00:00:53.702 00:34:46 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']'
00:00:53.702 00:34:46 -- common/autotest_common.sh@1093 -- $ xtrace_disable
00:00:53.702 00:34:46 -- common/autotest_common.sh@10 -- $ set +x
00:00:53.962 ************************************
00:00:53.962 START TEST make
00:00:53.962 ************************************
00:34:46 -- common/autotest_common.sh@1111 -- $ make -j128
00:00:54.221 make[1]: Nothing to be done for 'all'.
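
[Editor's note] The configure invocation above is assembled from the autorun-spdk.conf toggles by get_config_params: SPDK_RUN_ASAN=1 and SPDK_RUN_UBSAN=1 become --enable-asan and --enable-ubsan, with --with-shared appended by autobuild.sh. A much-reduced sketch of that mapping, assuming only the variables and flags visible in this log (the real helper handles many more options):

  # How autorun-spdk.conf toggles become ./configure flags (simplified).
  config_params='--enable-debug --enable-werror'
  [[ $SPDK_RUN_ASAN  -eq 1 ]] && config_params+=' --enable-asan'
  [[ $SPDK_RUN_UBSAN -eq 1 ]] && config_params+=' --enable-ubsan'
  ./configure $config_params --with-shared
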
00:01:00.801 The Meson build system
00:01:00.801 Version: 1.3.1
00:01:00.801 Source dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk
00:01:00.801 Build dir: /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp
00:01:00.801 Build type: native build
00:01:00.801 Program cat found: YES (/usr/bin/cat)
00:01:00.801 Project name: DPDK
00:01:00.801 Project version: 23.11.0
00:01:00.801 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:00.801 C linker for the host machine: cc ld.bfd 2.39-16
00:01:00.801 Host machine cpu family: x86_64
00:01:00.801 Host machine cpu: x86_64
00:01:00.801 Message: ## Building in Developer Mode ##
00:01:00.801 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:00.801 Program check-symbols.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:00.801 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:00.801 Program python3 found: YES (/usr/bin/python3)
00:01:00.801 Program cat found: YES (/usr/bin/cat)
00:01:00.801 Compiler for C supports arguments -march=native: YES
00:01:00.801 Checking for size of "void *" : 8
00:01:00.801 Checking for size of "void *" : 8 (cached)
00:01:00.801 Library m found: YES
00:01:00.801 Library numa found: YES
00:01:00.801 Has header "numaif.h" : YES
00:01:00.801 Library fdt found: NO
00:01:00.801 Library execinfo found: NO
00:01:00.801 Has header "execinfo.h" : YES
00:01:00.801 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:00.801 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:00.801 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:00.801 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:00.801 Run-time dependency openssl found: YES 3.0.9
00:01:00.801 Run-time dependency libpcap found: YES 1.10.4
00:01:00.801 Has header "pcap.h" with dependency libpcap: YES
00:01:00.801 Compiler for C supports arguments -Wcast-qual: YES
00:01:00.801 Compiler for C supports arguments -Wdeprecated: YES
00:01:00.801 Compiler for C supports arguments -Wformat: YES
00:01:00.801 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:00.801 Compiler for C supports arguments -Wformat-security: NO
00:01:00.801 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:00.801 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:00.801 Compiler for C supports arguments -Wnested-externs: YES
00:01:00.801 Compiler for C supports arguments -Wold-style-definition: YES
00:01:00.801 Compiler for C supports arguments -Wpointer-arith: YES
00:01:00.801 Compiler for C supports arguments -Wsign-compare: YES
00:01:00.801 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:00.801 Compiler for C supports arguments -Wundef: YES
00:01:00.801 Compiler for C supports arguments -Wwrite-strings: YES
00:01:00.801 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:00.801 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:00.801 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:00.801 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:00.801 Program objdump found: YES (/usr/bin/objdump)
00:01:00.801 Compiler for C supports arguments -mavx512f: YES
00:01:00.801 Checking if "AVX512 checking" compiles: YES
00:01:00.801 Fetching value of define "__SSE4_2__" : 1
00:01:00.801 Fetching value of define "__AES__" : 1
00:01:00.802 Fetching value of define "__AVX__" : 1
00:01:00.802 Fetching value of define "__AVX2__" : 1
00:01:00.802 Fetching value of define "__AVX512BW__" : 1
00:01:00.802 Fetching value of define "__AVX512CD__" : 1
00:01:00.802 Fetching value of define "__AVX512DQ__" : 1
00:01:00.802 Fetching value of define "__AVX512F__" : 1
00:01:00.802 Fetching value of define "__AVX512VL__" : 1
00:01:00.802 Fetching value of define "__PCLMUL__" : 1
00:01:00.802 Fetching value of define "__RDRND__" : 1
00:01:00.802 Fetching value of define "__RDSEED__" : 1
00:01:00.802 Fetching value of define "__VPCLMULQDQ__" : 1
00:01:00.802 Fetching value of define "__znver1__" : (undefined)
00:01:00.802 Fetching value of define "__znver2__" : (undefined)
00:01:00.802 Fetching value of define "__znver3__" : (undefined)
00:01:00.802 Fetching value of define "__znver4__" : (undefined)
00:01:00.802 Library asan found: YES
00:01:00.802 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:00.802 Message: lib/log: Defining dependency "log"
00:01:00.802 Message: lib/kvargs: Defining dependency "kvargs"
00:01:00.802 Message: lib/telemetry: Defining dependency "telemetry"
00:01:00.802 Library rt found: YES
00:01:00.802 Checking for function "getentropy" : NO
00:01:00.802 Message: lib/eal: Defining dependency "eal"
00:01:00.802 Message: lib/ring: Defining dependency "ring"
00:01:00.802 Message: lib/rcu: Defining dependency "rcu"
00:01:00.802 Message: lib/mempool: Defining dependency "mempool"
00:01:00.802 Message: lib/mbuf: Defining dependency "mbuf"
00:01:00.802 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:00.802 Fetching value of define "__AVX512F__" : 1 (cached)
00:01:00.802 Fetching value of define "__AVX512BW__" : 1 (cached)
00:01:00.802 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:01:00.802 Fetching value of define "__AVX512VL__" : 1 (cached)
00:01:00.802 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:01:00.802 Compiler for C supports arguments -mpclmul: YES
00:01:00.802 Compiler for C supports arguments -maes: YES
00:01:00.802 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:00.802 Compiler for C supports arguments -mavx512bw: YES
00:01:00.802 Compiler for C supports arguments -mavx512dq: YES
00:01:00.802 Compiler for C supports arguments -mavx512vl: YES
00:01:00.802 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:00.802 Compiler for C supports arguments -mavx2: YES
00:01:00.802 Compiler for C supports arguments -mavx: YES
00:01:00.802 Message: lib/net: Defining dependency "net"
00:01:00.802 Message: lib/meter: Defining dependency "meter"
00:01:00.802 Message: lib/ethdev: Defining dependency "ethdev"
00:01:00.802 Message: lib/pci: Defining dependency "pci"
00:01:00.802 Message: lib/cmdline: Defining dependency "cmdline"
00:01:00.802 Message: lib/hash: Defining dependency "hash"
00:01:00.802 Message: lib/timer: Defining dependency "timer"
00:01:00.802 Message: lib/compressdev: Defining dependency "compressdev"
00:01:00.802 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:00.802 Message: lib/dmadev: Defining dependency "dmadev"
00:01:00.802 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:00.802 Message: lib/power: Defining dependency "power"
00:01:00.802 Message: lib/reorder: Defining dependency "reorder"
00:01:00.802 Message: lib/security: Defining dependency "security"
00:01:00.802 Has header "linux/userfaultfd.h" : YES
00:01:00.802 Has header "linux/vduse.h" : YES
00:01:00.802 Message: lib/vhost: Defining dependency "vhost"
"vhost" 00:01:00.802 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:00.802 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:00.802 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:00.802 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:00.802 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:00.802 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:00.802 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:00.802 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:00.802 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:00.802 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:00.802 Program doxygen found: YES (/usr/bin/doxygen) 00:01:00.802 Configuring doxy-api-html.conf using configuration 00:01:00.802 Configuring doxy-api-man.conf using configuration 00:01:00.802 Program mandb found: YES (/usr/bin/mandb) 00:01:00.802 Program sphinx-build found: NO 00:01:00.802 Configuring rte_build_config.h using configuration 00:01:00.802 Message: 00:01:00.802 ================= 00:01:00.802 Applications Enabled 00:01:00.802 ================= 00:01:00.802 00:01:00.802 apps: 00:01:00.802 00:01:00.802 00:01:00.802 Message: 00:01:00.802 ================= 00:01:00.802 Libraries Enabled 00:01:00.802 ================= 00:01:00.802 00:01:00.802 libs: 00:01:00.802 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:00.802 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:00.802 cryptodev, dmadev, power, reorder, security, vhost, 00:01:00.802 00:01:00.802 Message: 00:01:00.802 =============== 00:01:00.802 Drivers Enabled 00:01:00.802 =============== 00:01:00.802 00:01:00.802 common: 00:01:00.802 00:01:00.802 bus: 00:01:00.802 pci, vdev, 00:01:00.802 mempool: 00:01:00.802 ring, 00:01:00.802 dma: 00:01:00.802 00:01:00.802 net: 00:01:00.802 00:01:00.802 crypto: 00:01:00.802 00:01:00.802 compress: 00:01:00.802 00:01:00.802 vdpa: 00:01:00.802 00:01:00.802 00:01:00.802 Message: 00:01:00.802 ================= 00:01:00.802 Content Skipped 00:01:00.802 ================= 00:01:00.802 00:01:00.802 apps: 00:01:00.802 dumpcap: explicitly disabled via build config 00:01:00.802 graph: explicitly disabled via build config 00:01:00.802 pdump: explicitly disabled via build config 00:01:00.802 proc-info: explicitly disabled via build config 00:01:00.802 test-acl: explicitly disabled via build config 00:01:00.802 test-bbdev: explicitly disabled via build config 00:01:00.802 test-cmdline: explicitly disabled via build config 00:01:00.802 test-compress-perf: explicitly disabled via build config 00:01:00.802 test-crypto-perf: explicitly disabled via build config 00:01:00.802 test-dma-perf: explicitly disabled via build config 00:01:00.802 test-eventdev: explicitly disabled via build config 00:01:00.802 test-fib: explicitly disabled via build config 00:01:00.802 test-flow-perf: explicitly disabled via build config 00:01:00.802 test-gpudev: explicitly disabled via build config 00:01:00.802 test-mldev: explicitly disabled via build config 00:01:00.802 test-pipeline: explicitly disabled via build config 00:01:00.802 test-pmd: explicitly disabled via build config 00:01:00.802 test-regex: explicitly disabled via build config 00:01:00.802 test-sad: explicitly disabled via build config 00:01:00.802 test-security-perf: explicitly disabled via build config 00:01:00.802 
00:01:00.802
00:01:00.802 libs:
00:01:00.802 metrics: explicitly disabled via build config
00:01:00.802 acl: explicitly disabled via build config
00:01:00.802 bbdev: explicitly disabled via build config
00:01:00.802 bitratestats: explicitly disabled via build config
00:01:00.802 bpf: explicitly disabled via build config
00:01:00.802 cfgfile: explicitly disabled via build config
00:01:00.802 distributor: explicitly disabled via build config
00:01:00.802 efd: explicitly disabled via build config
00:01:00.802 eventdev: explicitly disabled via build config
00:01:00.802 dispatcher: explicitly disabled via build config
00:01:00.802 gpudev: explicitly disabled via build config
00:01:00.802 gro: explicitly disabled via build config
00:01:00.802 gso: explicitly disabled via build config
00:01:00.802 ip_frag: explicitly disabled via build config
00:01:00.802 jobstats: explicitly disabled via build config
00:01:00.802 latencystats: explicitly disabled via build config
00:01:00.803 lpm: explicitly disabled via build config
00:01:00.803 member: explicitly disabled via build config
00:01:00.803 pcapng: explicitly disabled via build config
00:01:00.803 rawdev: explicitly disabled via build config
00:01:00.803 regexdev: explicitly disabled via build config
00:01:00.803 mldev: explicitly disabled via build config
00:01:00.803 rib: explicitly disabled via build config
00:01:00.803 sched: explicitly disabled via build config
00:01:00.803 stack: explicitly disabled via build config
00:01:00.803 ipsec: explicitly disabled via build config
00:01:00.803 pdcp: explicitly disabled via build config
00:01:00.803 fib: explicitly disabled via build config
00:01:00.803 port: explicitly disabled via build config
00:01:00.803 pdump: explicitly disabled via build config
00:01:00.803 table: explicitly disabled via build config
00:01:00.803 pipeline: explicitly disabled via build config
00:01:00.803 graph: explicitly disabled via build config
00:01:00.803 node: explicitly disabled via build config
00:01:00.803
00:01:00.803 drivers:
00:01:00.803 common/cpt: not in enabled drivers build config
00:01:00.803 common/dpaax: not in enabled drivers build config
00:01:00.803 common/iavf: not in enabled drivers build config
00:01:00.803 common/idpf: not in enabled drivers build config
00:01:00.803 common/mvep: not in enabled drivers build config
00:01:00.803 common/octeontx: not in enabled drivers build config
00:01:00.803 bus/auxiliary: not in enabled drivers build config
00:01:00.803 bus/cdx: not in enabled drivers build config
00:01:00.803 bus/dpaa: not in enabled drivers build config
00:01:00.803 bus/fslmc: not in enabled drivers build config
00:01:00.803 bus/ifpga: not in enabled drivers build config
00:01:00.803 bus/platform: not in enabled drivers build config
00:01:00.803 bus/vmbus: not in enabled drivers build config
00:01:00.803 common/cnxk: not in enabled drivers build config
00:01:00.803 common/mlx5: not in enabled drivers build config
00:01:00.803 common/nfp: not in enabled drivers build config
00:01:00.803 common/qat: not in enabled drivers build config
00:01:00.803 common/sfc_efx: not in enabled drivers build config
00:01:00.803 mempool/bucket: not in enabled drivers build config
00:01:00.803 mempool/cnxk: not in enabled drivers build config
00:01:00.803 mempool/dpaa: not in enabled drivers build config
00:01:00.803 mempool/dpaa2: not in enabled drivers build config
00:01:00.803 mempool/octeontx: not in enabled drivers build config
00:01:00.803 mempool/stack: not in enabled drivers build config
00:01:00.803 dma/cnxk: not in enabled drivers build config
00:01:00.803 dma/dpaa: not in enabled drivers build config
00:01:00.803 dma/dpaa2: not in enabled drivers build config
00:01:00.803 dma/hisilicon: not in enabled drivers build config
00:01:00.803 dma/idxd: not in enabled drivers build config
00:01:00.803 dma/ioat: not in enabled drivers build config
00:01:00.803 dma/skeleton: not in enabled drivers build config
00:01:00.803 net/af_packet: not in enabled drivers build config
00:01:00.803 net/af_xdp: not in enabled drivers build config
00:01:00.803 net/ark: not in enabled drivers build config
00:01:00.803 net/atlantic: not in enabled drivers build config
00:01:00.803 net/avp: not in enabled drivers build config
00:01:00.803 net/axgbe: not in enabled drivers build config
00:01:00.803 net/bnx2x: not in enabled drivers build config
00:01:00.803 net/bnxt: not in enabled drivers build config
00:01:00.803 net/bonding: not in enabled drivers build config
00:01:00.803 net/cnxk: not in enabled drivers build config
00:01:00.803 net/cpfl: not in enabled drivers build config
00:01:00.803 net/cxgbe: not in enabled drivers build config
00:01:00.803 net/dpaa: not in enabled drivers build config
00:01:00.803 net/dpaa2: not in enabled drivers build config
00:01:00.803 net/e1000: not in enabled drivers build config
00:01:00.803 net/ena: not in enabled drivers build config
00:01:00.803 net/enetc: not in enabled drivers build config
00:01:00.803 net/enetfec: not in enabled drivers build config
00:01:00.803 net/enic: not in enabled drivers build config
00:01:00.803 net/failsafe: not in enabled drivers build config
00:01:00.803 net/fm10k: not in enabled drivers build config
00:01:00.803 net/gve: not in enabled drivers build config
00:01:00.803 net/hinic: not in enabled drivers build config
00:01:00.803 net/hns3: not in enabled drivers build config
00:01:00.803 net/i40e: not in enabled drivers build config
00:01:00.803 net/iavf: not in enabled drivers build config
00:01:00.803 net/ice: not in enabled drivers build config
00:01:00.803 net/idpf: not in enabled drivers build config
00:01:00.803 net/igc: not in enabled drivers build config
00:01:00.803 net/ionic: not in enabled drivers build config
00:01:00.803 net/ipn3ke: not in enabled drivers build config
00:01:00.803 net/ixgbe: not in enabled drivers build config
00:01:00.803 net/mana: not in enabled drivers build config
00:01:00.803 net/memif: not in enabled drivers build config
00:01:00.803 net/mlx4: not in enabled drivers build config
00:01:00.803 net/mlx5: not in enabled drivers build config
00:01:00.803 net/mvneta: not in enabled drivers build config
00:01:00.803 net/mvpp2: not in enabled drivers build config
00:01:00.803 net/netvsc: not in enabled drivers build config
00:01:00.803 net/nfb: not in enabled drivers build config
00:01:00.803 net/nfp: not in enabled drivers build config
00:01:00.803 net/ngbe: not in enabled drivers build config
00:01:00.803 net/null: not in enabled drivers build config
00:01:00.803 net/octeontx: not in enabled drivers build config
00:01:00.803 net/octeon_ep: not in enabled drivers build config
00:01:00.803 net/pcap: not in enabled drivers build config
00:01:00.803 net/pfe: not in enabled drivers build config
00:01:00.803 net/qede: not in enabled drivers build config
00:01:00.803 net/ring: not in enabled drivers build config
00:01:00.803 net/sfc: not in enabled drivers build config
00:01:00.803 net/softnic: not in enabled drivers build config
00:01:00.803 net/tap: not in enabled drivers build config
00:01:00.803 net/thunderx: not in enabled drivers build config
00:01:00.803 net/txgbe: not in enabled drivers build config
00:01:00.803 net/vdev_netvsc: not in enabled drivers build config
00:01:00.803 net/vhost: not in enabled drivers build config
00:01:00.803 net/virtio: not in enabled drivers build config
00:01:00.803 net/vmxnet3: not in enabled drivers build config
00:01:00.803 raw/*: missing internal dependency, "rawdev"
00:01:00.803 crypto/armv8: not in enabled drivers build config
00:01:00.803 crypto/bcmfs: not in enabled drivers build config
00:01:00.803 crypto/caam_jr: not in enabled drivers build config
00:01:00.803 crypto/ccp: not in enabled drivers build config
00:01:00.803 crypto/cnxk: not in enabled drivers build config
00:01:00.803 crypto/dpaa_sec: not in enabled drivers build config
00:01:00.803 crypto/dpaa2_sec: not in enabled drivers build config
00:01:00.803 crypto/ipsec_mb: not in enabled drivers build config
00:01:00.803 crypto/mlx5: not in enabled drivers build config
00:01:00.803 crypto/mvsam: not in enabled drivers build config
00:01:00.803 crypto/nitrox: not in enabled drivers build config
00:01:00.803 crypto/null: not in enabled drivers build config
00:01:00.803 crypto/octeontx: not in enabled drivers build config
00:01:00.803 crypto/openssl: not in enabled drivers build config
00:01:00.803 crypto/scheduler: not in enabled drivers build config
00:01:00.803 crypto/uadk: not in enabled drivers build config
00:01:00.803 crypto/virtio: not in enabled drivers build config
00:01:00.803 compress/isal: not in enabled drivers build config
00:01:00.803 compress/mlx5: not in enabled drivers build config
00:01:00.803 compress/octeontx: not in enabled drivers build config
00:01:00.803 compress/zlib: not in enabled drivers build config
00:01:00.803 regex/*: missing internal dependency, "regexdev"
00:01:00.803 ml/*: missing internal dependency, "mldev"
00:01:00.803 vdpa/ifc: not in enabled drivers build config
00:01:00.803 vdpa/mlx5: not in enabled drivers build config
00:01:00.803 vdpa/nfp: not in enabled drivers build config
00:01:00.803 vdpa/sfc: not in enabled drivers build config
00:01:00.803 event/*: missing internal dependency, "eventdev"
00:01:00.803 baseband/*: missing internal dependency, "bbdev"
00:01:00.803 gpu/*: missing internal dependency, "gpudev"
00:01:00.803
00:01:00.803
00:01:00.803 Build targets in project: 84
00:01:00.803
00:01:00.803 DPDK 23.11.0
00:01:00.803
00:01:00.803 User defined options
00:01:00.803 buildtype : debug
00:01:00.803 default_library : shared
00:01:00.803 libdir : lib
00:01:00.803 prefix : /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build
00:01:00.803 b_sanitize : address
00:01:00.803 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:00.803 c_link_args :
00:01:00.803 cpu_instruction_set: native
00:01:00.803 disable_apps : test-acl,test-bbdev,test-crypto-perf,test-fib,test-pipeline,test-gpudev,test-flow-perf,pdump,dumpcap,test-sad,test-cmdline,test-eventdev,proc-info,test,test-dma-perf,test-pmd,test-mldev,test-compress-perf,test-security-perf,graph,test-regex
00:01:00.803 disable_libs : pipeline,member,eventdev,efd,bbdev,cfgfile,rib,sched,mldev,metrics,lpm,latencystats,pdump,pdcp,bpf,ipsec,fib,ip_frag,table,port,stack,gro,jobstats,regexdev,rawdev,pcapng,dispatcher,node,bitratestats,acl,gpudev,distributor,graph,gso
00:01:00.803 enable_docs : false
00:01:00.803 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:01:00.803 enable_kmods : false
00:01:00.803 tests : false
00:01:00.803
00:01:00.803 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
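
[Editor's note] The "User defined options" block above is what SPDK's dpdkbuild passes to meson setup for the bundled DPDK; the exact argv is not shown in this log, but a roughly equivalent hand-run invocation, assuming standard meson/DPDK option names and with the long disable lists elided to the values printed above, would look like:

  # Approximate meson setup behind the summary above (sketch, not the
  # literal command SPDK ran; disable lists abbreviated as "...").
  meson setup build-tmp \
    --buildtype=debug \
    --default-library=shared \
    --libdir=lib \
    --prefix=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build \
    -Db_sanitize=address \
    -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
    -Dcpu_instruction_set=native \
    -Ddisable_apps=... -Ddisable_libs=... \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_docs=false -Denable_kmods=false -Dtests=false
  ninja -C build-tmp
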
00:01:00.804 ninja: Entering directory `/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp'
00:01:00.804 [1/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
[2/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
[3/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
[4/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
[5/264] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
[6/264] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
[7/264] Linking static target lib/librte_kvargs.a
[8/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
[9/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
[10/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
[11/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
[12/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
[13/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
[14/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
[15/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
[16/264] Compiling C object lib/librte_log.a.p/log_log.c.o
[17/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
[18/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
[19/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
[20/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
[21/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
[22/264] Linking static target lib/librte_log.a
[23/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
[24/264] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
[25/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
[26/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
[27/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:01.063 [28/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
[29/264] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
[30/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
[31/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
[32/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
[33/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
[34/264] Linking static target lib/librte_pci.a
[35/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
[36/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
[37/264] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
[38/264] Compiling C object lib/librte_power.a.p/power_power_common.c.o
[39/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
[40/264] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:01.324 [41/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
[42/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
[43/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
[44/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
[45/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
[46/264] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
[47/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
[48/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
[49/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
[50/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
[51/264] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
[52/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
[53/264] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
[54/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
[55/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
[56/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
[57/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
[58/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
[59/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
[60/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
[61/264] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
[62/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
[63/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
[64/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
[65/264] Linking static target lib/librte_meter.a
[66/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
[67/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
[68/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
[69/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
[70/264] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
[71/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
[72/264] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
[73/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
[74/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
[75/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
[76/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
[77/264] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
[78/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
[79/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
[80/264] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
[81/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
[82/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
[83/264] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
[84/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
[85/264] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
[86/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
[87/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
[88/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
[89/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
[90/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
[91/264] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
[92/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
[93/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
[94/264] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
[95/264] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
[96/264] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
[97/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
[98/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
[99/264] Linking static target lib/librte_ring.a
[100/264] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
[101/264] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
[102/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
[103/264] Linking static target lib/librte_cmdline.a
[104/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
[105/264] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
[106/264] Linking static target lib/librte_telemetry.a
[107/264] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
[108/264] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
[109/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
[110/264] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
[111/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
[112/264] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o
[113/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
[114/264] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
[115/264] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
[116/264] Linking static target lib/librte_timer.a
[117/264] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
[118/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
[119/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
[120/264] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
[121/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
[122/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
[123/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:01.581 [124/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
[125/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
[126/264] Linking static target drivers/libtmp_rte_bus_vdev.a
[127/264] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
[128/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
[129/264] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
[130/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
[131/264] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
[132/264] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
[133/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
[134/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
[135/264] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
[136/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
[137/264] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
[138/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
[139/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
[140/264] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
[141/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
[142/264] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
[143/264] Linking static target lib/librte_dmadev.a
[144/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
[145/264] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
[146/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
[147/264] Linking target lib/librte_log.so.24.0
[148/264] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
[149/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
[150/264] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
[151/264] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
[152/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
[153/264] Linking static target lib/librte_compressdev.a
[154/264] Linking static target lib/librte_mempool.a
[155/264] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
[156/264] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
[157/264] Linking static target lib/librte_power.a
[158/264] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
[159/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
[160/264] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:01.582 [161/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:01.582 [162/264] Linking static target lib/librte_net.a 00:01:01.582 [163/264] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:01.582 [164/264] Linking static target lib/librte_eal.a 00:01:01.582 [165/264] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.582 [166/264] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:01.582 [167/264] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:01.582 [168/264] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:01.582 [169/264] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:01.582 [170/264] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:01.582 [171/264] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.582 [172/264] Linking static target lib/librte_rcu.a 00:01:01.582 [173/264] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:01.582 [174/264] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:01.582 [175/264] Linking static target drivers/librte_bus_vdev.a 00:01:01.582 [176/264] Linking target lib/librte_kvargs.so.24.0 00:01:01.582 [177/264] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:01.582 [178/264] Linking static target lib/librte_reorder.a 00:01:01.582 [179/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:01.582 [180/264] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.839 [181/264] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.839 [182/264] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:01.839 [183/264] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:01.839 [184/264] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.839 [185/264] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:01.839 [186/264] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:01.839 [187/264] Linking target lib/librte_telemetry.so.24.0 00:01:01.839 [188/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:01.839 [189/264] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:01.839 [190/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:01.839 [191/264] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:01.839 [192/264] Linking static target lib/librte_security.a 00:01:01.839 [193/264] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.839 [194/264] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:01.839 [195/264] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:01.839 [196/264] Linking static target drivers/librte_bus_pci.a 00:01:01.839 [197/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:01.839 [198/264] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.839 [199/264] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.839 [200/264] Generating symbol file 
lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:01.839 [201/264] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:01.839 [202/264] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.839 [203/264] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:01.839 [204/264] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:01.839 [205/264] Linking static target drivers/librte_mempool_ring.a 00:01:01.839 [206/264] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:01.839 [207/264] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:01.839 [208/264] Linking static target lib/librte_mbuf.a 00:01:01.839 [209/264] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:01.839 [210/264] Linking static target lib/librte_hash.a 00:01:02.097 [211/264] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.097 [212/264] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.097 [213/264] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.097 [214/264] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.097 [215/264] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:02.097 [216/264] Linking static target lib/librte_cryptodev.a 00:01:02.097 [217/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:02.097 [218/264] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.355 [219/264] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.355 [220/264] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:02.920 [221/264] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:02.920 [222/264] Linking static target lib/librte_ethdev.a 00:01:03.178 [223/264] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:03.436 [224/264] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:05.962 [225/264] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:05.962 [226/264] Linking static target lib/librte_vhost.a 00:01:06.895 [227/264] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:07.829 [228/264] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.087 [229/264] Linking target lib/librte_eal.so.24.0 00:01:08.087 [230/264] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:08.087 [231/264] Linking target lib/librte_meter.so.24.0 00:01:08.087 [232/264] Linking target lib/librte_ring.so.24.0 00:01:08.087 [233/264] Linking target lib/librte_pci.so.24.0 00:01:08.087 [234/264] Linking target lib/librte_dmadev.so.24.0 00:01:08.087 [235/264] Linking target lib/librte_timer.so.24.0 00:01:08.087 [236/264] Linking target drivers/librte_bus_vdev.so.24.0 00:01:08.087 [237/264] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:08.087 [238/264] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:08.087 [239/264] Generating symbol file 
lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:08.087 [240/264] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:08.087 [241/264] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:08.345 [242/264] Linking target drivers/librte_bus_pci.so.24.0 00:01:08.345 [243/264] Linking target lib/librte_rcu.so.24.0 00:01:08.345 [244/264] Linking target lib/librte_mempool.so.24.0 00:01:08.345 [245/264] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:08.345 [246/264] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:08.345 [247/264] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:08.345 [248/264] Linking target drivers/librte_mempool_ring.so.24.0 00:01:08.345 [249/264] Linking target lib/librte_mbuf.so.24.0 00:01:08.345 [250/264] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:08.345 [251/264] Linking target lib/librte_compressdev.so.24.0 00:01:08.345 [252/264] Linking target lib/librte_cryptodev.so.24.0 00:01:08.604 [253/264] Linking target lib/librte_net.so.24.0 00:01:08.604 [254/264] Linking target lib/librte_reorder.so.24.0 00:01:08.604 [255/264] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:08.604 [256/264] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:08.604 [257/264] Linking target lib/librte_cmdline.so.24.0 00:01:08.604 [258/264] Linking target lib/librte_security.so.24.0 00:01:08.604 [259/264] Linking target lib/librte_hash.so.24.0 00:01:08.604 [260/264] Linking target lib/librte_ethdev.so.24.0 00:01:08.604 [261/264] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:08.604 [262/264] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:08.604 [263/264] Linking target lib/librte_power.so.24.0 00:01:08.864 [264/264] Linking target lib/librte_vhost.so.24.0 00:01:08.864 INFO: autodetecting backend as ninja 00:01:08.864 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128 00:01:09.431 CC lib/log/log.o 00:01:09.431 CC lib/log/log_flags.o 00:01:09.431 CC lib/log/log_deprecated.o 00:01:09.431 CC lib/ut_mock/mock.o 00:01:09.689 CC lib/ut/ut.o 00:01:09.689 LIB libspdk_ut_mock.a 00:01:09.689 SO libspdk_ut_mock.so.6.0 00:01:09.689 LIB libspdk_ut.a 00:01:09.689 LIB libspdk_log.a 00:01:09.689 SO libspdk_ut.so.2.0 00:01:09.689 SO libspdk_log.so.7.0 00:01:09.689 SYMLINK libspdk_ut_mock.so 00:01:09.689 SYMLINK libspdk_ut.so 00:01:09.689 SYMLINK libspdk_log.so 00:01:09.947 CC lib/dma/dma.o 00:01:09.947 CXX lib/trace_parser/trace.o 00:01:09.947 CC lib/ioat/ioat.o 00:01:09.947 CC lib/util/base64.o 00:01:09.947 CC lib/util/bit_array.o 00:01:09.947 CC lib/util/cpuset.o 00:01:09.947 CC lib/util/crc16.o 00:01:09.947 CC lib/util/crc32.o 00:01:09.947 CC lib/util/crc32_ieee.o 00:01:09.947 CC lib/util/crc32c.o 00:01:09.947 CC lib/util/dif.o 00:01:09.947 CC lib/util/crc64.o 00:01:09.947 CC lib/util/fd.o 00:01:09.947 CC lib/util/iov.o 00:01:09.947 CC lib/util/file.o 00:01:09.947 CC lib/util/hexlify.o 00:01:09.947 CC lib/util/math.o 00:01:09.947 CC lib/util/pipe.o 00:01:09.947 CC lib/util/strerror_tls.o 00:01:09.947 CC lib/util/uuid.o 00:01:09.947 CC lib/util/string.o 00:01:09.947 CC lib/util/fd_group.o 00:01:09.947 CC lib/util/xor.o 00:01:09.947 CC lib/util/zipf.o 
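The DPDK sub-build above finishes at [264/264] and hands off to the ninja command that the log reports, after which SPDK's own make begins compiling lib/log, lib/ut_mock, and the rest. A minimal sketch of re-running just the embedded DPDK build by hand, assuming the build directory was already configured by meson earlier in the job (the configure step is not shown in this excerpt):

# Rebuild only the DPDK submodule inside the SPDK tree.
# -j 128 mirrors the job count used on this node; adjust to local core count.
ninja -C /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build-tmp -j 128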
00:01:10.206 CC lib/vfio_user/host/vfio_user_pci.o 00:01:10.206 CC lib/vfio_user/host/vfio_user.o 00:01:10.206 LIB libspdk_dma.a 00:01:10.206 SO libspdk_dma.so.4.0 00:01:10.206 SYMLINK libspdk_dma.so 00:01:10.206 LIB libspdk_vfio_user.a 00:01:10.206 SO libspdk_vfio_user.so.5.0 00:01:10.206 LIB libspdk_ioat.a 00:01:10.206 SO libspdk_ioat.so.7.0 00:01:10.466 SYMLINK libspdk_vfio_user.so 00:01:10.466 SYMLINK libspdk_ioat.so 00:01:10.723 LIB libspdk_util.a 00:01:10.723 SO libspdk_util.so.9.0 00:01:10.723 SYMLINK libspdk_util.so 00:01:10.981 LIB libspdk_trace_parser.a 00:01:10.981 SO libspdk_trace_parser.so.5.0 00:01:10.981 CC lib/conf/conf.o 00:01:10.981 CC lib/rdma/common.o 00:01:10.981 CC lib/rdma/rdma_verbs.o 00:01:10.981 CC lib/json/json_util.o 00:01:10.981 CC lib/json/json_parse.o 00:01:10.981 CC lib/json/json_write.o 00:01:10.981 CC lib/idxd/idxd.o 00:01:10.981 CC lib/idxd/idxd_user.o 00:01:10.981 CC lib/env_dpdk/env.o 00:01:10.981 CC lib/env_dpdk/memory.o 00:01:10.981 CC lib/env_dpdk/threads.o 00:01:10.981 CC lib/env_dpdk/pci.o 00:01:10.981 CC lib/vmd/vmd.o 00:01:10.981 CC lib/env_dpdk/init.o 00:01:10.981 CC lib/env_dpdk/pci_vmd.o 00:01:10.981 CC lib/env_dpdk/pci_ioat.o 00:01:10.981 CC lib/env_dpdk/pci_virtio.o 00:01:10.981 CC lib/vmd/led.o 00:01:10.981 CC lib/env_dpdk/pci_event.o 00:01:10.981 CC lib/env_dpdk/pci_idxd.o 00:01:10.981 CC lib/env_dpdk/pci_dpdk.o 00:01:10.981 CC lib/env_dpdk/sigbus_handler.o 00:01:10.981 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:10.981 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:10.981 SYMLINK libspdk_trace_parser.so 00:01:11.239 LIB libspdk_conf.a 00:01:11.239 SO libspdk_conf.so.6.0 00:01:11.239 LIB libspdk_rdma.a 00:01:11.239 SYMLINK libspdk_conf.so 00:01:11.239 SO libspdk_rdma.so.6.0 00:01:11.239 LIB libspdk_json.a 00:01:11.496 SO libspdk_json.so.6.0 00:01:11.496 SYMLINK libspdk_rdma.so 00:01:11.496 SYMLINK libspdk_json.so 00:01:11.496 LIB libspdk_vmd.a 00:01:11.496 SO libspdk_vmd.so.6.0 00:01:11.496 SYMLINK libspdk_vmd.so 00:01:11.753 CC lib/jsonrpc/jsonrpc_server.o 00:01:11.753 CC lib/jsonrpc/jsonrpc_client.o 00:01:11.753 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:11.753 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:11.753 LIB libspdk_idxd.a 00:01:11.753 SO libspdk_idxd.so.12.0 00:01:11.753 SYMLINK libspdk_idxd.so 00:01:11.753 LIB libspdk_jsonrpc.a 00:01:12.012 SO libspdk_jsonrpc.so.6.0 00:01:12.012 SYMLINK libspdk_jsonrpc.so 00:01:12.271 CC lib/rpc/rpc.o 00:01:12.529 LIB libspdk_rpc.a 00:01:12.529 SO libspdk_rpc.so.6.0 00:01:12.529 SYMLINK libspdk_rpc.so 00:01:12.529 LIB libspdk_env_dpdk.a 00:01:12.529 SO libspdk_env_dpdk.so.14.0 00:01:12.787 CC lib/notify/notify.o 00:01:12.787 CC lib/notify/notify_rpc.o 00:01:12.787 CC lib/trace/trace.o 00:01:12.787 CC lib/trace/trace_flags.o 00:01:12.787 CC lib/trace/trace_rpc.o 00:01:12.787 SYMLINK libspdk_env_dpdk.so 00:01:12.787 CC lib/keyring/keyring.o 00:01:12.787 CC lib/keyring/keyring_rpc.o 00:01:12.787 LIB libspdk_notify.a 00:01:12.787 SO libspdk_notify.so.6.0 00:01:12.787 LIB libspdk_keyring.a 00:01:13.044 SO libspdk_keyring.so.1.0 00:01:13.044 LIB libspdk_trace.a 00:01:13.044 SYMLINK libspdk_notify.so 00:01:13.044 SYMLINK libspdk_keyring.so 00:01:13.044 SO libspdk_trace.so.10.0 00:01:13.044 SYMLINK libspdk_trace.so 00:01:13.303 CC lib/thread/iobuf.o 00:01:13.303 CC lib/thread/thread.o 00:01:13.303 CC lib/sock/sock.o 00:01:13.303 CC lib/sock/sock_rpc.o 00:01:13.870 LIB libspdk_sock.a 00:01:13.870 SO libspdk_sock.so.9.0 00:01:13.870 SYMLINK libspdk_sock.so 00:01:14.129 CC lib/nvme/nvme_ctrlr.o 00:01:14.129 CC 
lib/nvme/nvme_ns.o 00:01:14.129 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:14.129 CC lib/nvme/nvme_fabric.o 00:01:14.129 CC lib/nvme/nvme_ns_cmd.o 00:01:14.129 CC lib/nvme/nvme_pcie.o 00:01:14.129 CC lib/nvme/nvme_quirks.o 00:01:14.129 CC lib/nvme/nvme_pcie_common.o 00:01:14.129 CC lib/nvme/nvme_qpair.o 00:01:14.129 CC lib/nvme/nvme_discovery.o 00:01:14.129 CC lib/nvme/nvme_transport.o 00:01:14.129 CC lib/nvme/nvme.o 00:01:14.129 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:14.129 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:14.129 CC lib/nvme/nvme_tcp.o 00:01:14.129 CC lib/nvme/nvme_opal.o 00:01:14.129 CC lib/nvme/nvme_io_msg.o 00:01:14.129 CC lib/nvme/nvme_zns.o 00:01:14.129 CC lib/nvme/nvme_stubs.o 00:01:14.129 CC lib/nvme/nvme_poll_group.o 00:01:14.129 CC lib/nvme/nvme_cuse.o 00:01:14.129 CC lib/nvme/nvme_rdma.o 00:01:14.129 CC lib/nvme/nvme_auth.o 00:01:15.065 LIB libspdk_thread.a 00:01:15.065 SO libspdk_thread.so.10.0 00:01:15.065 SYMLINK libspdk_thread.so 00:01:15.065 CC lib/blob/blobstore.o 00:01:15.065 CC lib/blob/request.o 00:01:15.065 CC lib/blob/zeroes.o 00:01:15.065 CC lib/blob/blob_bs_dev.o 00:01:15.324 CC lib/accel/accel.o 00:01:15.324 CC lib/accel/accel_sw.o 00:01:15.324 CC lib/accel/accel_rpc.o 00:01:15.324 CC lib/init/subsystem.o 00:01:15.324 CC lib/init/subsystem_rpc.o 00:01:15.324 CC lib/init/json_config.o 00:01:15.324 CC lib/virtio/virtio.o 00:01:15.324 CC lib/virtio/virtio_vhost_user.o 00:01:15.324 CC lib/init/rpc.o 00:01:15.324 CC lib/virtio/virtio_vfio_user.o 00:01:15.324 CC lib/virtio/virtio_pci.o 00:01:15.582 LIB libspdk_init.a 00:01:15.582 SO libspdk_init.so.5.0 00:01:15.582 LIB libspdk_virtio.a 00:01:15.582 SYMLINK libspdk_init.so 00:01:15.582 SO libspdk_virtio.so.7.0 00:01:15.582 SYMLINK libspdk_virtio.so 00:01:15.842 CC lib/event/app.o 00:01:15.842 CC lib/event/reactor.o 00:01:15.842 CC lib/event/log_rpc.o 00:01:15.842 CC lib/event/app_rpc.o 00:01:15.842 CC lib/event/scheduler_static.o 00:01:16.103 LIB libspdk_nvme.a 00:01:16.103 LIB libspdk_accel.a 00:01:16.414 SO libspdk_accel.so.15.0 00:01:16.414 SO libspdk_nvme.so.13.0 00:01:16.414 LIB libspdk_event.a 00:01:16.414 SYMLINK libspdk_accel.so 00:01:16.414 SO libspdk_event.so.13.0 00:01:16.414 SYMLINK libspdk_event.so 00:01:16.677 CC lib/bdev/bdev.o 00:01:16.677 CC lib/bdev/bdev_rpc.o 00:01:16.677 CC lib/bdev/part.o 00:01:16.677 CC lib/bdev/bdev_zone.o 00:01:16.677 CC lib/bdev/scsi_nvme.o 00:01:16.677 SYMLINK libspdk_nvme.so 00:01:18.578 LIB libspdk_blob.a 00:01:18.578 SO libspdk_blob.so.11.0 00:01:18.578 SYMLINK libspdk_blob.so 00:01:18.578 CC lib/lvol/lvol.o 00:01:18.578 CC lib/blobfs/blobfs.o 00:01:18.578 CC lib/blobfs/tree.o 00:01:18.838 LIB libspdk_bdev.a 00:01:18.838 SO libspdk_bdev.so.15.0 00:01:19.096 SYMLINK libspdk_bdev.so 00:01:19.356 CC lib/nvmf/ctrlr.o 00:01:19.356 CC lib/nvmf/ctrlr_bdev.o 00:01:19.356 CC lib/nvmf/ctrlr_discovery.o 00:01:19.356 CC lib/nvmf/subsystem.o 00:01:19.356 CC lib/nvmf/transport.o 00:01:19.356 CC lib/nvmf/nvmf.o 00:01:19.356 CC lib/nvmf/tcp.o 00:01:19.356 CC lib/nvmf/nvmf_rpc.o 00:01:19.356 CC lib/nvmf/rdma.o 00:01:19.356 CC lib/scsi/dev.o 00:01:19.356 CC lib/scsi/lun.o 00:01:19.356 CC lib/scsi/port.o 00:01:19.356 CC lib/scsi/scsi_bdev.o 00:01:19.356 CC lib/scsi/scsi.o 00:01:19.356 CC lib/scsi/task.o 00:01:19.356 CC lib/scsi/scsi_rpc.o 00:01:19.356 CC lib/scsi/scsi_pr.o 00:01:19.356 CC lib/ftl/ftl_core.o 00:01:19.356 CC lib/ftl/ftl_init.o 00:01:19.356 CC lib/ftl/ftl_layout.o 00:01:19.356 CC lib/ftl/ftl_debug.o 00:01:19.356 CC lib/ftl/ftl_io.o 00:01:19.356 CC lib/ftl/ftl_sb.o 00:01:19.356 CC 
lib/ftl/ftl_l2p.o 00:01:19.356 CC lib/ftl/ftl_l2p_flat.o 00:01:19.356 CC lib/ftl/ftl_nv_cache.o 00:01:19.356 CC lib/ftl/ftl_band.o 00:01:19.356 CC lib/ftl/ftl_writer.o 00:01:19.356 CC lib/ftl/ftl_band_ops.o 00:01:19.356 CC lib/ftl/ftl_rq.o 00:01:19.356 CC lib/ftl/ftl_reloc.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt.o 00:01:19.356 CC lib/ftl/ftl_l2p_cache.o 00:01:19.356 CC lib/ftl/ftl_p2l.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:01:19.356 CC lib/ublk/ublk.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_startup.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_md.o 00:01:19.356 CC lib/ublk/ublk_rpc.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_misc.o 00:01:19.356 CC lib/nbd/nbd.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:01:19.356 CC lib/nbd/nbd_rpc.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_band.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:01:19.356 CC lib/ftl/utils/ftl_conf.o 00:01:19.356 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:01:19.356 CC lib/ftl/utils/ftl_mempool.o 00:01:19.356 CC lib/ftl/utils/ftl_md.o 00:01:19.356 CC lib/ftl/utils/ftl_property.o 00:01:19.356 CC lib/ftl/utils/ftl_bitmap.o 00:01:19.356 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:01:19.356 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:01:19.356 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:01:19.356 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:01:19.356 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:01:19.356 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:01:19.356 CC lib/ftl/upgrade/ftl_sb_v3.o 00:01:19.356 CC lib/ftl/upgrade/ftl_sb_v5.o 00:01:19.356 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:01:19.356 CC lib/ftl/nvc/ftl_nvc_dev.o 00:01:19.356 CC lib/ftl/base/ftl_base_dev.o 00:01:19.356 CC lib/ftl/base/ftl_base_bdev.o 00:01:19.356 CC lib/ftl/ftl_trace.o 00:01:19.615 LIB libspdk_lvol.a 00:01:19.615 SO libspdk_lvol.so.10.0 00:01:19.615 LIB libspdk_blobfs.a 00:01:19.615 SO libspdk_blobfs.so.10.0 00:01:19.615 SYMLINK libspdk_lvol.so 00:01:19.615 SYMLINK libspdk_blobfs.so 00:01:19.874 LIB libspdk_nbd.a 00:01:19.874 SO libspdk_nbd.so.7.0 00:01:19.874 LIB libspdk_scsi.a 00:01:20.131 SYMLINK libspdk_nbd.so 00:01:20.131 SO libspdk_scsi.so.9.0 00:01:20.131 SYMLINK libspdk_scsi.so 00:01:20.131 LIB libspdk_ublk.a 00:01:20.131 SO libspdk_ublk.so.3.0 00:01:20.390 SYMLINK libspdk_ublk.so 00:01:20.390 CC lib/iscsi/conn.o 00:01:20.390 CC lib/iscsi/md5.o 00:01:20.390 CC lib/iscsi/init_grp.o 00:01:20.390 CC lib/iscsi/iscsi.o 00:01:20.390 CC lib/iscsi/portal_grp.o 00:01:20.390 CC lib/iscsi/param.o 00:01:20.390 LIB libspdk_ftl.a 00:01:20.390 CC lib/iscsi/iscsi_subsystem.o 00:01:20.390 CC lib/iscsi/tgt_node.o 00:01:20.390 CC lib/iscsi/task.o 00:01:20.390 CC lib/iscsi/iscsi_rpc.o 00:01:20.390 CC lib/vhost/vhost_rpc.o 00:01:20.390 CC lib/vhost/vhost_scsi.o 00:01:20.390 CC lib/vhost/vhost.o 00:01:20.390 CC lib/vhost/rte_vhost_user.o 00:01:20.390 CC lib/vhost/vhost_blk.o 00:01:20.390 SO libspdk_ftl.so.9.0 00:01:20.955 SYMLINK libspdk_ftl.so 00:01:21.212 LIB libspdk_nvmf.a 00:01:21.212 SO libspdk_nvmf.so.18.0 00:01:21.470 LIB libspdk_vhost.a 00:01:21.470 SO libspdk_vhost.so.8.0 00:01:21.470 SYMLINK libspdk_nvmf.so 00:01:21.470 SYMLINK libspdk_vhost.so 00:01:21.730 LIB libspdk_iscsi.a 00:01:21.730 SO libspdk_iscsi.so.8.0 00:01:21.989 SYMLINK libspdk_iscsi.so 00:01:22.556 CC module/env_dpdk/env_dpdk_rpc.o 00:01:22.556 CC module/accel/dsa/accel_dsa.o 00:01:22.556 CC module/accel/dsa/accel_dsa_rpc.o 
00:01:22.556 CC module/accel/error/accel_error.o 00:01:22.556 CC module/accel/error/accel_error_rpc.o 00:01:22.556 CC module/keyring/file/keyring.o 00:01:22.556 CC module/accel/iaa/accel_iaa_rpc.o 00:01:22.556 CC module/keyring/file/keyring_rpc.o 00:01:22.556 CC module/accel/iaa/accel_iaa.o 00:01:22.556 CC module/accel/ioat/accel_ioat.o 00:01:22.556 CC module/scheduler/gscheduler/gscheduler.o 00:01:22.556 CC module/accel/ioat/accel_ioat_rpc.o 00:01:22.556 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:01:22.556 CC module/blob/bdev/blob_bdev.o 00:01:22.556 CC module/sock/posix/posix.o 00:01:22.556 CC module/scheduler/dynamic/scheduler_dynamic.o 00:01:22.556 LIB libspdk_env_dpdk_rpc.a 00:01:22.556 SO libspdk_env_dpdk_rpc.so.6.0 00:01:22.556 LIB libspdk_scheduler_dpdk_governor.a 00:01:22.556 SYMLINK libspdk_env_dpdk_rpc.so 00:01:22.556 SO libspdk_scheduler_dpdk_governor.so.4.0 00:01:22.556 LIB libspdk_scheduler_dynamic.a 00:01:22.556 LIB libspdk_accel_iaa.a 00:01:22.556 LIB libspdk_keyring_file.a 00:01:22.556 LIB libspdk_scheduler_gscheduler.a 00:01:22.556 SO libspdk_scheduler_dynamic.so.4.0 00:01:22.556 SO libspdk_scheduler_gscheduler.so.4.0 00:01:22.813 SO libspdk_keyring_file.so.1.0 00:01:22.813 SO libspdk_accel_iaa.so.3.0 00:01:22.813 LIB libspdk_accel_error.a 00:01:22.813 SYMLINK libspdk_scheduler_dpdk_governor.so 00:01:22.813 LIB libspdk_accel_ioat.a 00:01:22.813 LIB libspdk_blob_bdev.a 00:01:22.813 SO libspdk_accel_error.so.2.0 00:01:22.813 SYMLINK libspdk_scheduler_dynamic.so 00:01:22.813 SO libspdk_accel_ioat.so.6.0 00:01:22.813 SYMLINK libspdk_scheduler_gscheduler.so 00:01:22.813 SO libspdk_blob_bdev.so.11.0 00:01:22.813 SYMLINK libspdk_keyring_file.so 00:01:22.813 SYMLINK libspdk_accel_iaa.so 00:01:22.813 LIB libspdk_accel_dsa.a 00:01:22.813 SYMLINK libspdk_accel_error.so 00:01:22.813 SO libspdk_accel_dsa.so.5.0 00:01:22.813 SYMLINK libspdk_accel_ioat.so 00:01:22.813 SYMLINK libspdk_blob_bdev.so 00:01:22.813 SYMLINK libspdk_accel_dsa.so 00:01:23.073 CC module/blobfs/bdev/blobfs_bdev.o 00:01:23.073 CC module/bdev/error/vbdev_error.o 00:01:23.073 CC module/bdev/zone_block/vbdev_zone_block.o 00:01:23.073 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:01:23.073 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:01:23.073 CC module/bdev/error/vbdev_error_rpc.o 00:01:23.073 CC module/bdev/passthru/vbdev_passthru.o 00:01:23.073 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:01:23.073 CC module/bdev/split/vbdev_split.o 00:01:23.073 CC module/bdev/split/vbdev_split_rpc.o 00:01:23.073 CC module/bdev/null/bdev_null.o 00:01:23.073 CC module/bdev/raid/bdev_raid_rpc.o 00:01:23.073 CC module/bdev/ftl/bdev_ftl_rpc.o 00:01:23.073 CC module/bdev/raid/bdev_raid.o 00:01:23.073 CC module/bdev/null/bdev_null_rpc.o 00:01:23.073 CC module/bdev/ftl/bdev_ftl.o 00:01:23.073 CC module/bdev/raid/bdev_raid_sb.o 00:01:23.073 CC module/bdev/raid/raid1.o 00:01:23.073 CC module/bdev/raid/raid0.o 00:01:23.073 CC module/bdev/delay/vbdev_delay.o 00:01:23.073 CC module/bdev/lvol/vbdev_lvol.o 00:01:23.073 CC module/bdev/gpt/gpt.o 00:01:23.073 CC module/bdev/delay/vbdev_delay_rpc.o 00:01:23.073 CC module/bdev/raid/concat.o 00:01:23.073 CC module/bdev/gpt/vbdev_gpt.o 00:01:23.073 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:01:23.073 CC module/bdev/virtio/bdev_virtio_rpc.o 00:01:23.073 CC module/bdev/virtio/bdev_virtio_scsi.o 00:01:23.073 CC module/bdev/virtio/bdev_virtio_blk.o 00:01:23.073 CC module/bdev/nvme/bdev_nvme.o 00:01:23.073 CC module/bdev/aio/bdev_aio_rpc.o 00:01:23.073 LIB libspdk_sock_posix.a 00:01:23.073 CC 
module/bdev/malloc/bdev_malloc.o 00:01:23.073 CC module/bdev/aio/bdev_aio.o 00:01:23.073 CC module/bdev/nvme/nvme_rpc.o 00:01:23.073 CC module/bdev/nvme/bdev_nvme_rpc.o 00:01:23.073 CC module/bdev/malloc/bdev_malloc_rpc.o 00:01:23.073 CC module/bdev/nvme/bdev_mdns_client.o 00:01:23.073 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:01:23.073 CC module/bdev/iscsi/bdev_iscsi.o 00:01:23.073 CC module/bdev/nvme/vbdev_opal.o 00:01:23.073 CC module/bdev/nvme/vbdev_opal_rpc.o 00:01:23.073 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:01:23.073 SO libspdk_sock_posix.so.6.0 00:01:23.332 SYMLINK libspdk_sock_posix.so 00:01:23.332 LIB libspdk_blobfs_bdev.a 00:01:23.332 SO libspdk_blobfs_bdev.so.6.0 00:01:23.332 LIB libspdk_bdev_error.a 00:01:23.332 LIB libspdk_bdev_passthru.a 00:01:23.332 SO libspdk_bdev_error.so.6.0 00:01:23.332 LIB libspdk_bdev_split.a 00:01:23.332 LIB libspdk_bdev_zone_block.a 00:01:23.332 SO libspdk_bdev_passthru.so.6.0 00:01:23.332 SYMLINK libspdk_blobfs_bdev.so 00:01:23.589 LIB libspdk_bdev_null.a 00:01:23.589 SO libspdk_bdev_split.so.6.0 00:01:23.589 SYMLINK libspdk_bdev_error.so 00:01:23.589 SO libspdk_bdev_zone_block.so.6.0 00:01:23.589 LIB libspdk_bdev_gpt.a 00:01:23.590 LIB libspdk_bdev_ftl.a 00:01:23.590 SO libspdk_bdev_null.so.6.0 00:01:23.590 SYMLINK libspdk_bdev_passthru.so 00:01:23.590 SO libspdk_bdev_ftl.so.6.0 00:01:23.590 SO libspdk_bdev_gpt.so.6.0 00:01:23.590 SYMLINK libspdk_bdev_split.so 00:01:23.590 SYMLINK libspdk_bdev_zone_block.so 00:01:23.590 SYMLINK libspdk_bdev_null.so 00:01:23.590 LIB libspdk_bdev_aio.a 00:01:23.590 LIB libspdk_bdev_delay.a 00:01:23.590 SYMLINK libspdk_bdev_gpt.so 00:01:23.590 SYMLINK libspdk_bdev_ftl.so 00:01:23.590 LIB libspdk_bdev_iscsi.a 00:01:23.590 SO libspdk_bdev_aio.so.6.0 00:01:23.590 LIB libspdk_bdev_malloc.a 00:01:23.590 SO libspdk_bdev_delay.so.6.0 00:01:23.590 SO libspdk_bdev_iscsi.so.6.0 00:01:23.590 SO libspdk_bdev_malloc.so.6.0 00:01:23.590 LIB libspdk_bdev_virtio.a 00:01:23.590 SO libspdk_bdev_virtio.so.6.0 00:01:23.590 SYMLINK libspdk_bdev_aio.so 00:01:23.590 SYMLINK libspdk_bdev_delay.so 00:01:23.590 SYMLINK libspdk_bdev_iscsi.so 00:01:23.590 SYMLINK libspdk_bdev_malloc.so 00:01:23.590 SYMLINK libspdk_bdev_virtio.so 00:01:23.590 LIB libspdk_bdev_lvol.a 00:01:23.848 SO libspdk_bdev_lvol.so.6.0 00:01:23.848 SYMLINK libspdk_bdev_lvol.so 00:01:24.415 LIB libspdk_bdev_raid.a 00:01:24.415 SO libspdk_bdev_raid.so.6.0 00:01:24.415 SYMLINK libspdk_bdev_raid.so 00:01:24.982 LIB libspdk_bdev_nvme.a 00:01:24.982 SO libspdk_bdev_nvme.so.7.0 00:01:24.982 SYMLINK libspdk_bdev_nvme.so 00:01:25.547 CC module/event/subsystems/iobuf/iobuf.o 00:01:25.547 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:01:25.547 CC module/event/subsystems/keyring/keyring.o 00:01:25.547 CC module/event/subsystems/vmd/vmd.o 00:01:25.547 CC module/event/subsystems/vmd/vmd_rpc.o 00:01:25.547 CC module/event/subsystems/sock/sock.o 00:01:25.547 CC module/event/subsystems/scheduler/scheduler.o 00:01:25.547 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:01:25.547 LIB libspdk_event_sock.a 00:01:25.547 LIB libspdk_event_vhost_blk.a 00:01:25.547 SO libspdk_event_vhost_blk.so.3.0 00:01:25.547 SO libspdk_event_sock.so.5.0 00:01:25.547 LIB libspdk_event_keyring.a 00:01:25.547 LIB libspdk_event_iobuf.a 00:01:25.547 LIB libspdk_event_vmd.a 00:01:25.547 LIB libspdk_event_scheduler.a 00:01:25.547 SO libspdk_event_keyring.so.1.0 00:01:25.547 SYMLINK libspdk_event_sock.so 00:01:25.547 SO libspdk_event_vmd.so.6.0 00:01:25.547 SO libspdk_event_scheduler.so.4.0 00:01:25.547 SO 
libspdk_event_iobuf.so.3.0 00:01:25.547 SYMLINK libspdk_event_vhost_blk.so 00:01:25.805 SYMLINK libspdk_event_keyring.so 00:01:25.805 SYMLINK libspdk_event_scheduler.so 00:01:25.805 SYMLINK libspdk_event_vmd.so 00:01:25.805 SYMLINK libspdk_event_iobuf.so 00:01:26.065 CC module/event/subsystems/accel/accel.o 00:01:26.065 LIB libspdk_event_accel.a 00:01:26.065 SO libspdk_event_accel.so.6.0 00:01:26.065 SYMLINK libspdk_event_accel.so 00:01:26.325 CC module/event/subsystems/bdev/bdev.o 00:01:26.584 LIB libspdk_event_bdev.a 00:01:26.584 SO libspdk_event_bdev.so.6.0 00:01:26.584 SYMLINK libspdk_event_bdev.so 00:01:26.842 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:01:26.842 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:01:26.842 CC module/event/subsystems/scsi/scsi.o 00:01:26.842 CC module/event/subsystems/ublk/ublk.o 00:01:26.842 CC module/event/subsystems/nbd/nbd.o 00:01:27.100 LIB libspdk_event_ublk.a 00:01:27.100 LIB libspdk_event_scsi.a 00:01:27.100 LIB libspdk_event_nbd.a 00:01:27.100 SO libspdk_event_ublk.so.3.0 00:01:27.100 SO libspdk_event_scsi.so.6.0 00:01:27.100 SO libspdk_event_nbd.so.6.0 00:01:27.100 SYMLINK libspdk_event_ublk.so 00:01:27.100 SYMLINK libspdk_event_scsi.so 00:01:27.100 SYMLINK libspdk_event_nbd.so 00:01:27.100 LIB libspdk_event_nvmf.a 00:01:27.100 SO libspdk_event_nvmf.so.6.0 00:01:27.100 SYMLINK libspdk_event_nvmf.so 00:01:27.359 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:01:27.359 CC module/event/subsystems/iscsi/iscsi.o 00:01:27.359 LIB libspdk_event_vhost_scsi.a 00:01:27.359 SO libspdk_event_vhost_scsi.so.3.0 00:01:27.359 LIB libspdk_event_iscsi.a 00:01:27.359 SYMLINK libspdk_event_vhost_scsi.so 00:01:27.359 SO libspdk_event_iscsi.so.6.0 00:01:27.617 SYMLINK libspdk_event_iscsi.so 00:01:27.617 SO libspdk.so.6.0 00:01:27.617 SYMLINK libspdk.so 00:01:27.875 CC app/spdk_top/spdk_top.o 00:01:27.875 CC app/trace_record/trace_record.o 00:01:27.875 CC app/spdk_nvme_identify/identify.o 00:01:27.875 CC app/spdk_lspci/spdk_lspci.o 00:01:27.876 CC app/spdk_nvme_discover/discovery_aer.o 00:01:27.876 CXX app/trace/trace.o 00:01:27.876 CC app/spdk_nvme_perf/perf.o 00:01:27.876 TEST_HEADER include/spdk/accel.h 00:01:27.876 TEST_HEADER include/spdk/accel_module.h 00:01:27.876 TEST_HEADER include/spdk/assert.h 00:01:27.876 TEST_HEADER include/spdk/barrier.h 00:01:27.876 TEST_HEADER include/spdk/base64.h 00:01:27.876 TEST_HEADER include/spdk/bdev.h 00:01:27.876 TEST_HEADER include/spdk/bdev_zone.h 00:01:27.876 TEST_HEADER include/spdk/bdev_module.h 00:01:27.876 CC test/rpc_client/rpc_client_test.o 00:01:27.876 TEST_HEADER include/spdk/bit_array.h 00:01:27.876 TEST_HEADER include/spdk/bit_pool.h 00:01:27.876 TEST_HEADER include/spdk/blob_bdev.h 00:01:27.876 TEST_HEADER include/spdk/blobfs_bdev.h 00:01:27.876 TEST_HEADER include/spdk/blob.h 00:01:27.876 TEST_HEADER include/spdk/blobfs.h 00:01:27.876 TEST_HEADER include/spdk/conf.h 00:01:27.876 TEST_HEADER include/spdk/config.h 00:01:27.876 TEST_HEADER include/spdk/cpuset.h 00:01:27.876 TEST_HEADER include/spdk/crc16.h 00:01:27.876 TEST_HEADER include/spdk/crc32.h 00:01:27.876 TEST_HEADER include/spdk/crc64.h 00:01:27.876 TEST_HEADER include/spdk/dif.h 00:01:27.876 TEST_HEADER include/spdk/dma.h 00:01:27.876 TEST_HEADER include/spdk/endian.h 00:01:27.876 TEST_HEADER include/spdk/env.h 00:01:27.876 TEST_HEADER include/spdk/event.h 00:01:27.876 TEST_HEADER include/spdk/fd.h 00:01:27.876 TEST_HEADER include/spdk/fd_group.h 00:01:27.876 TEST_HEADER include/spdk/file.h 00:01:27.876 TEST_HEADER include/spdk/env_dpdk.h 
00:01:27.876 TEST_HEADER include/spdk/ftl.h 00:01:27.876 TEST_HEADER include/spdk/gpt_spec.h 00:01:27.876 TEST_HEADER include/spdk/histogram_data.h 00:01:27.876 TEST_HEADER include/spdk/hexlify.h 00:01:27.876 CC app/vhost/vhost.o 00:01:27.876 TEST_HEADER include/spdk/idxd.h 00:01:27.876 TEST_HEADER include/spdk/init.h 00:01:27.876 TEST_HEADER include/spdk/ioat.h 00:01:27.876 TEST_HEADER include/spdk/idxd_spec.h 00:01:27.876 CC app/spdk_dd/spdk_dd.o 00:01:27.876 TEST_HEADER include/spdk/ioat_spec.h 00:01:27.876 TEST_HEADER include/spdk/json.h 00:01:27.876 CC app/nvmf_tgt/nvmf_main.o 00:01:27.876 TEST_HEADER include/spdk/iscsi_spec.h 00:01:27.876 TEST_HEADER include/spdk/jsonrpc.h 00:01:27.876 TEST_HEADER include/spdk/keyring.h 00:01:27.876 TEST_HEADER include/spdk/keyring_module.h 00:01:27.876 TEST_HEADER include/spdk/likely.h 00:01:27.876 TEST_HEADER include/spdk/lvol.h 00:01:27.876 CC app/iscsi_tgt/iscsi_tgt.o 00:01:27.876 TEST_HEADER include/spdk/log.h 00:01:27.876 TEST_HEADER include/spdk/memory.h 00:01:27.876 TEST_HEADER include/spdk/mmio.h 00:01:27.876 TEST_HEADER include/spdk/nbd.h 00:01:27.876 TEST_HEADER include/spdk/notify.h 00:01:27.876 TEST_HEADER include/spdk/nvme.h 00:01:27.876 TEST_HEADER include/spdk/nvme_intel.h 00:01:27.876 CC examples/interrupt_tgt/interrupt_tgt.o 00:01:28.143 TEST_HEADER include/spdk/nvme_ocssd.h 00:01:28.143 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:01:28.143 TEST_HEADER include/spdk/nvme_spec.h 00:01:28.143 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:01:28.143 TEST_HEADER include/spdk/nvmf_cmd.h 00:01:28.143 CC app/spdk_tgt/spdk_tgt.o 00:01:28.143 TEST_HEADER include/spdk/nvme_zns.h 00:01:28.143 TEST_HEADER include/spdk/nvmf.h 00:01:28.143 TEST_HEADER include/spdk/nvmf_spec.h 00:01:28.143 TEST_HEADER include/spdk/nvmf_transport.h 00:01:28.143 TEST_HEADER include/spdk/opal.h 00:01:28.143 TEST_HEADER include/spdk/opal_spec.h 00:01:28.143 TEST_HEADER include/spdk/pci_ids.h 00:01:28.143 TEST_HEADER include/spdk/queue.h 00:01:28.143 TEST_HEADER include/spdk/pipe.h 00:01:28.143 TEST_HEADER include/spdk/reduce.h 00:01:28.143 TEST_HEADER include/spdk/scheduler.h 00:01:28.143 TEST_HEADER include/spdk/rpc.h 00:01:28.143 TEST_HEADER include/spdk/scsi_spec.h 00:01:28.143 TEST_HEADER include/spdk/scsi.h 00:01:28.143 TEST_HEADER include/spdk/sock.h 00:01:28.143 TEST_HEADER include/spdk/stdinc.h 00:01:28.143 TEST_HEADER include/spdk/trace.h 00:01:28.143 TEST_HEADER include/spdk/string.h 00:01:28.143 TEST_HEADER include/spdk/thread.h 00:01:28.143 TEST_HEADER include/spdk/tree.h 00:01:28.143 TEST_HEADER include/spdk/trace_parser.h 00:01:28.143 TEST_HEADER include/spdk/util.h 00:01:28.143 TEST_HEADER include/spdk/ublk.h 00:01:28.143 TEST_HEADER include/spdk/version.h 00:01:28.143 TEST_HEADER include/spdk/uuid.h 00:01:28.143 TEST_HEADER include/spdk/vfio_user_pci.h 00:01:28.143 TEST_HEADER include/spdk/vfio_user_spec.h 00:01:28.143 TEST_HEADER include/spdk/vhost.h 00:01:28.143 TEST_HEADER include/spdk/vmd.h 00:01:28.143 CXX test/cpp_headers/accel.o 00:01:28.143 TEST_HEADER include/spdk/zipf.h 00:01:28.143 TEST_HEADER include/spdk/xor.h 00:01:28.143 CXX test/cpp_headers/assert.o 00:01:28.143 CXX test/cpp_headers/accel_module.o 00:01:28.143 CXX test/cpp_headers/base64.o 00:01:28.143 CXX test/cpp_headers/barrier.o 00:01:28.143 CXX test/cpp_headers/bdev.o 00:01:28.143 CXX test/cpp_headers/bdev_module.o 00:01:28.143 CXX test/cpp_headers/bdev_zone.o 00:01:28.143 CXX test/cpp_headers/bit_pool.o 00:01:28.143 CXX test/cpp_headers/bit_array.o 00:01:28.143 CXX 
test/cpp_headers/blob_bdev.o 00:01:28.143 CXX test/cpp_headers/blobfs.o 00:01:28.143 CXX test/cpp_headers/blobfs_bdev.o 00:01:28.143 CXX test/cpp_headers/blob.o 00:01:28.143 CXX test/cpp_headers/conf.o 00:01:28.143 CXX test/cpp_headers/config.o 00:01:28.143 CXX test/cpp_headers/cpuset.o 00:01:28.143 CXX test/cpp_headers/crc16.o 00:01:28.143 CXX test/cpp_headers/crc32.o 00:01:28.143 CXX test/cpp_headers/crc64.o 00:01:28.143 CXX test/cpp_headers/dma.o 00:01:28.143 CXX test/cpp_headers/dif.o 00:01:28.143 CXX test/cpp_headers/env.o 00:01:28.143 CXX test/cpp_headers/endian.o 00:01:28.143 CXX test/cpp_headers/env_dpdk.o 00:01:28.143 CXX test/cpp_headers/event.o 00:01:28.143 CXX test/cpp_headers/fd_group.o 00:01:28.143 CXX test/cpp_headers/fd.o 00:01:28.143 CXX test/cpp_headers/file.o 00:01:28.143 CXX test/cpp_headers/ftl.o 00:01:28.143 CXX test/cpp_headers/histogram_data.o 00:01:28.143 CXX test/cpp_headers/hexlify.o 00:01:28.143 CXX test/cpp_headers/gpt_spec.o 00:01:28.143 CXX test/cpp_headers/idxd.o 00:01:28.143 CXX test/cpp_headers/idxd_spec.o 00:01:28.143 CXX test/cpp_headers/init.o 00:01:28.143 CXX test/cpp_headers/ioat_spec.o 00:01:28.143 CXX test/cpp_headers/iscsi_spec.o 00:01:28.143 CXX test/cpp_headers/ioat.o 00:01:28.143 CXX test/cpp_headers/json.o 00:01:28.143 CXX test/cpp_headers/keyring.o 00:01:28.143 CXX test/cpp_headers/jsonrpc.o 00:01:28.143 CXX test/cpp_headers/keyring_module.o 00:01:28.144 CXX test/cpp_headers/likely.o 00:01:28.144 CXX test/cpp_headers/log.o 00:01:28.144 CXX test/cpp_headers/lvol.o 00:01:28.144 CXX test/cpp_headers/memory.o 00:01:28.144 CXX test/cpp_headers/nbd.o 00:01:28.144 CXX test/cpp_headers/mmio.o 00:01:28.144 CXX test/cpp_headers/notify.o 00:01:28.144 CXX test/cpp_headers/nvme_intel.o 00:01:28.144 CXX test/cpp_headers/nvme.o 00:01:28.144 CXX test/cpp_headers/nvme_ocssd.o 00:01:28.144 CXX test/cpp_headers/nvme_ocssd_spec.o 00:01:28.144 CC examples/ioat/perf/perf.o 00:01:28.144 CC examples/nvme/reconnect/reconnect.o 00:01:28.144 CC examples/vmd/lsvmd/lsvmd.o 00:01:28.144 CC test/env/vtophys/vtophys.o 00:01:28.407 CC examples/vmd/led/led.o 00:01:28.407 CC examples/accel/perf/accel_perf.o 00:01:28.407 CC app/fio/nvme/fio_plugin.o 00:01:28.407 CC test/app/jsoncat/jsoncat.o 00:01:28.407 CC test/nvme/e2edp/nvme_dp.o 00:01:28.407 CC examples/nvme/cmb_copy/cmb_copy.o 00:01:28.407 CC test/event/reactor_perf/reactor_perf.o 00:01:28.407 CC examples/idxd/perf/perf.o 00:01:28.407 CC test/app/stub/stub.o 00:01:28.407 CC test/thread/poller_perf/poller_perf.o 00:01:28.407 CC examples/nvme/hello_world/hello_world.o 00:01:28.407 CC examples/nvme/nvme_manage/nvme_manage.o 00:01:28.407 CC test/nvme/reserve/reserve.o 00:01:28.407 CC test/app/histogram_perf/histogram_perf.o 00:01:28.407 CC examples/ioat/verify/verify.o 00:01:28.407 CC test/env/memory/memory_ut.o 00:01:28.407 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:01:28.407 CC test/nvme/overhead/overhead.o 00:01:28.407 CC test/nvme/connect_stress/connect_stress.o 00:01:28.407 CC test/event/event_perf/event_perf.o 00:01:28.407 CC examples/nvme/abort/abort.o 00:01:28.407 CC test/nvme/compliance/nvme_compliance.o 00:01:28.407 CC test/nvme/err_injection/err_injection.o 00:01:28.407 CC test/nvme/reset/reset.o 00:01:28.407 CC examples/util/zipf/zipf.o 00:01:28.407 CC test/event/reactor/reactor.o 00:01:28.407 CC test/env/pci/pci_ut.o 00:01:28.407 CC examples/nvme/hotplug/hotplug.o 00:01:28.407 CC test/event/app_repeat/app_repeat.o 00:01:28.407 CC test/nvme/doorbell_aers/doorbell_aers.o 00:01:28.407 CC 
test/nvme/boot_partition/boot_partition.o 00:01:28.407 CC examples/thread/thread/thread_ex.o 00:01:28.407 CC test/nvme/sgl/sgl.o 00:01:28.407 CC examples/nvme/arbitration/arbitration.o 00:01:28.407 CC examples/sock/hello_world/hello_sock.o 00:01:28.407 CC test/nvme/fused_ordering/fused_ordering.o 00:01:28.407 CC test/nvme/cuse/cuse.o 00:01:28.407 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:01:28.407 CC test/app/bdev_svc/bdev_svc.o 00:01:28.407 CC test/nvme/aer/aer.o 00:01:28.407 CC examples/bdev/hello_world/hello_bdev.o 00:01:28.407 CC test/nvme/fdp/fdp.o 00:01:28.407 CC test/accel/dif/dif.o 00:01:28.407 CC test/nvme/startup/startup.o 00:01:28.407 CC examples/blob/hello_world/hello_blob.o 00:01:28.407 CC examples/nvmf/nvmf/nvmf.o 00:01:28.407 CC examples/blob/cli/blobcli.o 00:01:28.407 CC test/blobfs/mkfs/mkfs.o 00:01:28.407 CC test/nvme/simple_copy/simple_copy.o 00:01:28.407 CC app/fio/bdev/fio_plugin.o 00:01:28.407 CC test/bdev/bdevio/bdevio.o 00:01:28.407 CC test/event/scheduler/scheduler.o 00:01:28.407 CC test/dma/test_dma/test_dma.o 00:01:28.407 CC examples/bdev/bdevperf/bdevperf.o 00:01:28.672 LINK nvmf_tgt 00:01:28.672 LINK spdk_lspci 00:01:28.672 CC test/env/mem_callbacks/mem_callbacks.o 00:01:28.672 LINK interrupt_tgt 00:01:28.672 LINK rpc_client_test 00:01:28.672 CC test/lvol/esnap/esnap.o 00:01:28.672 LINK vhost 00:01:28.933 LINK spdk_tgt 00:01:28.933 LINK jsoncat 00:01:28.933 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:01:28.933 LINK iscsi_tgt 00:01:28.933 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:01:28.933 LINK histogram_perf 00:01:28.933 LINK spdk_nvme_discover 00:01:28.933 LINK vtophys 00:01:28.933 LINK reactor 00:01:28.933 LINK lsvmd 00:01:28.933 LINK led 00:01:28.933 LINK poller_perf 00:01:29.196 LINK spdk_trace_record 00:01:29.196 LINK boot_partition 00:01:29.196 LINK stub 00:01:29.196 LINK ioat_perf 00:01:29.196 LINK pmr_persistence 00:01:29.196 CXX test/cpp_headers/nvme_spec.o 00:01:29.196 LINK reactor_perf 00:01:29.196 CXX test/cpp_headers/nvme_zns.o 00:01:29.196 LINK env_dpdk_post_init 00:01:29.196 LINK connect_stress 00:01:29.196 CXX test/cpp_headers/nvmf_cmd.o 00:01:29.196 LINK err_injection 00:01:29.196 CXX test/cpp_headers/nvmf_fc_spec.o 00:01:29.196 LINK event_perf 00:01:29.196 CXX test/cpp_headers/nvmf.o 00:01:29.196 CXX test/cpp_headers/nvmf_spec.o 00:01:29.196 CXX test/cpp_headers/nvmf_transport.o 00:01:29.196 LINK mkfs 00:01:29.196 CXX test/cpp_headers/opal.o 00:01:29.196 CXX test/cpp_headers/opal_spec.o 00:01:29.196 LINK app_repeat 00:01:29.196 CXX test/cpp_headers/pci_ids.o 00:01:29.196 CXX test/cpp_headers/pipe.o 00:01:29.196 LINK thread 00:01:29.196 LINK hello_bdev 00:01:29.196 LINK fused_ordering 00:01:29.196 CXX test/cpp_headers/queue.o 00:01:29.196 CXX test/cpp_headers/reduce.o 00:01:29.196 LINK cmb_copy 00:01:29.196 LINK spdk_dd 00:01:29.196 CXX test/cpp_headers/rpc.o 00:01:29.196 LINK startup 00:01:29.196 CXX test/cpp_headers/scsi.o 00:01:29.196 CXX test/cpp_headers/scheduler.o 00:01:29.196 CXX test/cpp_headers/scsi_spec.o 00:01:29.196 CXX test/cpp_headers/sock.o 00:01:29.196 LINK zipf 00:01:29.196 LINK simple_copy 00:01:29.196 CXX test/cpp_headers/stdinc.o 00:01:29.196 LINK hello_world 00:01:29.196 CXX test/cpp_headers/string.o 00:01:29.196 CXX test/cpp_headers/thread.o 00:01:29.196 CXX test/cpp_headers/trace.o 00:01:29.196 LINK bdev_svc 00:01:29.196 CXX test/cpp_headers/trace_parser.o 00:01:29.196 CXX test/cpp_headers/tree.o 00:01:29.196 LINK hello_sock 00:01:29.196 CXX test/cpp_headers/util.o 00:01:29.196 CXX test/cpp_headers/ublk.o 
00:01:29.196 CXX test/cpp_headers/uuid.o 00:01:29.196 CXX test/cpp_headers/version.o 00:01:29.196 CXX test/cpp_headers/vfio_user_pci.o 00:01:29.196 CXX test/cpp_headers/vfio_user_spec.o 00:01:29.196 CXX test/cpp_headers/vhost.o 00:01:29.196 CXX test/cpp_headers/vmd.o 00:01:29.196 CXX test/cpp_headers/xor.o 00:01:29.196 LINK hotplug 00:01:29.196 CXX test/cpp_headers/zipf.o 00:01:29.196 LINK verify 00:01:29.196 LINK sgl 00:01:29.196 LINK reserve 00:01:29.196 LINK doorbell_aers 00:01:29.196 LINK scheduler 00:01:29.456 LINK nvmf 00:01:29.456 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:01:29.456 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:01:29.456 LINK nvme_dp 00:01:29.456 LINK hello_blob 00:01:29.456 LINK reconnect 00:01:29.456 LINK reset 00:01:29.456 LINK arbitration 00:01:29.456 LINK overhead 00:01:29.456 LINK aer 00:01:29.456 LINK test_dma 00:01:29.456 LINK fdp 00:01:29.456 LINK pci_ut 00:01:29.456 LINK nvme_compliance 00:01:29.456 LINK idxd_perf 00:01:29.456 LINK abort 00:01:29.714 LINK bdevio 00:01:29.714 LINK dif 00:01:29.715 LINK accel_perf 00:01:29.715 LINK spdk_nvme 00:01:29.715 LINK spdk_bdev 00:01:29.715 LINK nvme_manage 00:01:29.715 LINK spdk_trace 00:01:29.715 LINK mem_callbacks 00:01:29.715 LINK blobcli 00:01:29.715 LINK spdk_nvme_identify 00:01:29.973 LINK nvme_fuzz 00:01:29.973 LINK spdk_nvme_perf 00:01:29.973 LINK spdk_top 00:01:29.973 LINK vhost_fuzz 00:01:29.973 LINK memory_ut 00:01:29.973 LINK bdevperf 00:01:30.230 LINK cuse 00:01:30.794 LINK iscsi_fuzz 00:01:32.694 LINK esnap 00:01:32.953 00:01:32.953 real 0m38.989s 00:01:32.953 user 6m0.130s 00:01:32.953 sys 5m3.062s 00:01:32.953 00:35:25 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:01:32.953 00:35:25 -- common/autotest_common.sh@10 -- $ set +x 00:01:32.953 ************************************ 00:01:32.953 END TEST make 00:01:32.953 ************************************ 00:01:32.953 00:35:25 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:01:32.953 00:35:25 -- pm/common@30 -- $ signal_monitor_resources TERM 00:01:32.953 00:35:25 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:01:32.953 00:35:25 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.953 00:35:25 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:01:32.953 00:35:25 -- pm/common@45 -- $ pid=2422851 00:01:32.953 00:35:25 -- pm/common@52 -- $ sudo kill -TERM 2422851 00:01:32.953 00:35:25 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.953 00:35:25 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:01:32.953 00:35:25 -- pm/common@45 -- $ pid=2422848 00:01:32.953 00:35:25 -- pm/common@52 -- $ sudo kill -TERM 2422848 00:01:32.953 00:35:25 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.953 00:35:25 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:01:32.953 00:35:25 -- pm/common@45 -- $ pid=2422847 00:01:32.953 00:35:25 -- pm/common@52 -- $ sudo kill -TERM 2422847 00:01:32.953 00:35:25 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.953 00:35:25 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:01:32.953 00:35:25 -- pm/common@45 -- $ pid=2422846 00:01:32.953 00:35:25 -- pm/common@52 -- $ sudo kill -TERM 2422846 00:01:32.953 00:35:25 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:01:32.953 00:35:25 -- nvmf/common.sh@7 -- # uname -s 00:01:32.953 00:35:25 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:01:32.953 00:35:25 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:01:32.953 00:35:25 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:01:32.953 00:35:25 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:01:32.953 00:35:25 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:01:32.953 00:35:25 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:01:32.953 00:35:25 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:01:32.953 00:35:25 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:01:32.953 00:35:25 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:01:32.953 00:35:25 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:01:32.953 00:35:25 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:01:32.953 00:35:25 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:01:32.953 00:35:25 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:01:32.953 00:35:25 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:01:32.953 00:35:25 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:01:32.953 00:35:25 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:01:32.953 00:35:25 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:01:32.953 00:35:25 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:01:32.953 00:35:25 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:32.953 00:35:25 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:32.954 00:35:25 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.954 00:35:25 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.954 00:35:25 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.954 00:35:25 -- paths/export.sh@5 -- # export PATH 00:01:32.954 00:35:25 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:32.954 00:35:25 -- nvmf/common.sh@47 -- # : 0 00:01:32.954 00:35:25 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:01:32.954 00:35:25 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:01:32.954 00:35:25 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:01:32.954 00:35:25 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:01:32.954 00:35:25 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:01:32.954 
00:35:25 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:01:32.954 00:35:25 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:01:32.954 00:35:25 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:01:32.954 00:35:25 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:01:32.954 00:35:25 -- spdk/autotest.sh@32 -- # uname -s 00:01:32.954 00:35:25 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:01:32.954 00:35:25 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:01:32.954 00:35:25 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:32.954 00:35:25 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:01:32.954 00:35:25 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/coredumps 00:01:32.954 00:35:25 -- spdk/autotest.sh@44 -- # modprobe nbd 00:01:32.954 00:35:25 -- spdk/autotest.sh@46 -- # type -P udevadm 00:01:32.954 00:35:25 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:01:32.954 00:35:25 -- spdk/autotest.sh@48 -- # udevadm_pid=2482166 00:01:32.954 00:35:25 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:01:32.954 00:35:25 -- pm/common@17 -- # local monitor 00:01:32.954 00:35:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.954 00:35:25 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2482167 00:01:32.954 00:35:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.954 00:35:25 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2482168 00:01:32.954 00:35:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.954 00:35:25 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:01:32.954 00:35:25 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2482170 00:01:32.954 00:35:25 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:01:32.954 00:35:25 -- pm/common@21 -- # date +%s 00:01:32.954 00:35:25 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=2482171 00:01:32.954 00:35:25 -- pm/common@26 -- # sleep 1 00:01:32.954 00:35:25 -- pm/common@21 -- # date +%s 00:01:32.954 00:35:25 -- pm/common@21 -- # date +%s 00:01:32.954 00:35:25 -- pm/common@21 -- # date +%s 00:01:32.954 00:35:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714170925 00:01:32.954 00:35:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714170925 00:01:32.954 00:35:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714170925 00:01:32.954 00:35:25 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1714170925 00:01:32.954 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714170925_collect-bmc-pm.bmc.pm.log 00:01:32.954 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714170925_collect-vmstat.pm.log 00:01:32.954 Redirecting to 
/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714170925_collect-cpu-temp.pm.log 00:01:32.954 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autotest.sh.1714170925_collect-cpu-load.pm.log 00:01:34.329 00:35:26 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:01:34.329 00:35:26 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:01:34.329 00:35:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:34.329 00:35:26 -- common/autotest_common.sh@10 -- # set +x 00:01:34.329 00:35:26 -- spdk/autotest.sh@59 -- # create_test_list 00:01:34.329 00:35:26 -- common/autotest_common.sh@734 -- # xtrace_disable 00:01:34.329 00:35:26 -- common/autotest_common.sh@10 -- # set +x 00:01:34.329 00:35:26 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/autotest.sh 00:01:34.329 00:35:26 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:34.329 00:35:26 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:34.329 00:35:26 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output 00:01:34.329 00:35:26 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/dsa-phy-autotest/spdk 00:01:34.329 00:35:26 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:01:34.329 00:35:26 -- common/autotest_common.sh@1441 -- # uname 00:01:34.329 00:35:26 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:01:34.329 00:35:26 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:01:34.329 00:35:26 -- common/autotest_common.sh@1461 -- # uname 00:01:34.329 00:35:26 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:01:34.329 00:35:26 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:01:34.329 00:35:26 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:01:34.329 00:35:26 -- spdk/autotest.sh@72 -- # hash lcov 00:01:34.329 00:35:26 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:01:34.329 00:35:26 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:01:34.329 --rc lcov_branch_coverage=1 00:01:34.329 --rc lcov_function_coverage=1 00:01:34.329 --rc genhtml_branch_coverage=1 00:01:34.329 --rc genhtml_function_coverage=1 00:01:34.329 --rc genhtml_legend=1 00:01:34.329 --rc geninfo_all_blocks=1 00:01:34.329 ' 00:01:34.329 00:35:26 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:01:34.329 --rc lcov_branch_coverage=1 00:01:34.329 --rc lcov_function_coverage=1 00:01:34.329 --rc genhtml_branch_coverage=1 00:01:34.329 --rc genhtml_function_coverage=1 00:01:34.329 --rc genhtml_legend=1 00:01:34.329 --rc geninfo_all_blocks=1 00:01:34.329 ' 00:01:34.329 00:35:26 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:01:34.329 --rc lcov_branch_coverage=1 00:01:34.329 --rc lcov_function_coverage=1 00:01:34.329 --rc genhtml_branch_coverage=1 00:01:34.329 --rc genhtml_function_coverage=1 00:01:34.329 --rc genhtml_legend=1 00:01:34.329 --rc geninfo_all_blocks=1 00:01:34.329 --no-external' 00:01:34.329 00:35:26 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:01:34.329 --rc lcov_branch_coverage=1 00:01:34.329 --rc lcov_function_coverage=1 00:01:34.329 --rc genhtml_branch_coverage=1 00:01:34.329 --rc genhtml_function_coverage=1 00:01:34.329 --rc genhtml_legend=1 00:01:34.329 --rc geninfo_all_blocks=1 00:01:34.329 --no-external' 00:01:34.330 00:35:26 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc 
genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:01:34.330 lcov: LCOV version 1.14
00:35:26 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/dsa-phy-autotest/spdk -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info
00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/*.gcno ("no functions found" reported per file: accel, barrier, accel_module, bit_array, base64, assert, bdev, bdev_module, bdev_zone, blob_bdev, bit_pool, blobfs, blob, conf, crc16, blobfs_bdev, crc32, crc64, config, dif, cpuset, dma, env, endian, fd_group, fd, env_dpdk, ...)
data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/event.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/file.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/init.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/json.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:01:38.549 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:01:38.549 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 
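The long run of "no functions found" warnings here is expected rather than a failure: as far as this log shows, the test/cpp_headers objects exist only to compile each public SPDK header in a standalone translation unit and prove it is self-contained, so the resulting .gcno files define no functions for geninfo to record. A hedged sketch of that kind of self-containedness check (the loop, file names, and flags are illustrative, not the actual test):

# compile each public header alone; a clean compile proves the header
# pulls in everything it depends on, and the object defines no functions
for hdr in include/spdk/*.h; do
    printf '#include "spdk/%s"\n' "$(basename "$hdr")" > tu.cpp
    g++ -I include -c tu.cpp -o /dev/null || echo "not self-contained: $hdr"
done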
00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/log.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions 
found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/string.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/version.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/util.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:01:38.550 
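If these empty header records are unwanted noise in the final HTML report, lcov can also drop them from the tracefile after capture; --remove takes the tracefile followed by shell-style glob patterns. The tracefile name below is the illustrative one from the sketch above, and the path pattern matches the workspace layout in this log:

# strip the header-compile stubs from the merged tracefile
lcov --remove cov_total.info '*/test/cpp_headers/*' -o cov_total.filtered.info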
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:01:38.550 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:01:38.550 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:01:40.456 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:01:40.456 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:01:45.732 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:01:45.732 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:01:45.732 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:01:45.732 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:01:45.732 /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:01:45.733 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:01:49.929 00:35:42 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:01:49.929 00:35:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:01:49.929 00:35:42 -- common/autotest_common.sh@10 -- # set +x 00:01:49.929 00:35:42 -- spdk/autotest.sh@91 -- # rm -f 00:01:49.929 00:35:42 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:01:52.475 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:01:52.475 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:01:52.475 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:01:52.475 0000:cb:00.0 (8086 0a54): Already using the nvme driver 00:01:52.475 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:01:52.475 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:01:52.475 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:01:52.475 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:01:52.475 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:01:52.475 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:01:52.475 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:01:52.735 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:01:52.735 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:01:52.735 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:01:52.735 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:01:52.735 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:01:52.735 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:01:52.735 
0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:01:52.735 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:01:52.735 00:35:45 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:01:52.735 00:35:45 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:01:52.735 00:35:45 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:01:52.735 00:35:45 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:01:52.735 00:35:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:01:52.735 00:35:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:01:52.735 00:35:45 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:01:52.735 00:35:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:01:52.735 00:35:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:01:52.735 00:35:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:01:52.735 00:35:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:01:52.735 00:35:45 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:01:52.735 00:35:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:01:52.735 00:35:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:01:52.735 00:35:45 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:01:52.735 00:35:45 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:01:52.735 00:35:45 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:01:52.735 00:35:45 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:01:52.735 00:35:45 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:01:52.735 00:35:45 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:01:52.735 00:35:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:01:52.735 00:35:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:01:52.735 00:35:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:01:52.735 00:35:45 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:01:52.735 00:35:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:01:52.735 No valid GPT data, bailing 00:01:52.735 00:35:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:01:52.994 00:35:45 -- scripts/common.sh@391 -- # pt= 00:01:52.994 00:35:45 -- scripts/common.sh@392 -- # return 1 00:01:52.994 00:35:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:01:52.994 1+0 records in 00:01:52.994 1+0 records out 00:01:52.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00213959 s, 490 MB/s 00:01:52.994 00:35:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:01:52.994 00:35:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:01:52.994 00:35:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:01:52.994 00:35:45 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:01:52.994 00:35:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:01:52.994 No valid GPT data, bailing 00:01:52.994 00:35:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:01:52.994 00:35:45 -- scripts/common.sh@391 -- # pt= 00:01:52.994 00:35:45 -- scripts/common.sh@392 -- # return 1 00:01:52.994 00:35:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:01:52.994 1+0 records in 00:01:52.994 1+0 records out 00:01:52.994 1048576 
bytes (1.0 MB, 1.0 MiB) copied, 0.00217289 s, 483 MB/s 00:01:52.994 00:35:45 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:01:52.994 00:35:45 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:01:52.994 00:35:45 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme2n1 00:01:52.994 00:35:45 -- scripts/common.sh@378 -- # local block=/dev/nvme2n1 pt 00:01:52.994 00:35:45 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:01:52.994 No valid GPT data, bailing 00:01:52.994 00:35:45 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:01:52.994 00:35:45 -- scripts/common.sh@391 -- # pt= 00:01:52.994 00:35:45 -- scripts/common.sh@392 -- # return 1 00:01:52.994 00:35:45 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:01:52.994 1+0 records in 00:01:52.994 1+0 records out 00:01:52.994 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00185945 s, 564 MB/s 00:01:52.994 00:35:45 -- spdk/autotest.sh@118 -- # sync 00:01:52.994 00:35:45 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:01:52.994 00:35:45 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:01:52.994 00:35:45 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:01:58.280 00:35:50 -- spdk/autotest.sh@124 -- # uname -s 00:01:58.280 00:35:50 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:01:58.280 00:35:50 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:01:58.280 00:35:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:01:58.280 00:35:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:01:58.280 00:35:50 -- common/autotest_common.sh@10 -- # set +x 00:01:58.280 ************************************ 00:01:58.280 START TEST setup.sh 00:01:58.280 ************************************ 00:01:58.280 00:35:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/test-setup.sh 00:01:58.280 * Looking for test storage... 00:01:58.280 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:01:58.280 00:35:50 -- setup/test-setup.sh@10 -- # uname -s 00:01:58.280 00:35:50 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:01:58.280 00:35:50 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:01:58.280 00:35:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:01:58.280 00:35:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:01:58.280 00:35:50 -- common/autotest_common.sh@10 -- # set +x 00:01:58.280 ************************************ 00:01:58.280 START TEST acl 00:01:58.280 ************************************ 00:01:58.280 00:35:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/acl.sh 00:01:58.280 * Looking for test storage... 
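The pre-cleanup pass that just finished walks every conventional NVMe namespace: get_zoned_devs skips any device whose /sys/block/<dev>/queue/zoned attribute is not "none", block_in_use asks spdk-gpt.py and blkid whether the disk already carries a partition table, and a disk with no valid GPT gets its first MiB zeroed so stale metadata cannot bleed into the tests. Condensed into standalone shell, a sketch of the same decisions (not the actual autotest.sh/scripts/common.sh functions):

for dev in /dev/nvme*n1; do
    name=${dev#/dev/}
    # zoned namespaces are skipped; they cannot be scrubbed like flat disks
    if [[ -e /sys/block/$name/queue/zoned && $(cat /sys/block/$name/queue/zoned) != none ]]; then
        continue
    fi
    # wipe only disks that show no partition table
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done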
00:01:58.280 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:01:58.280 00:35:50 -- setup/acl.sh@10 -- # get_zoned_devs 00:01:58.280 00:35:50 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:01:58.280 00:35:50 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:01:58.280 00:35:50 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:01:58.280 00:35:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:01:58.280 00:35:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:01:58.280 00:35:50 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:01:58.280 00:35:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:01:58.280 00:35:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:01:58.280 00:35:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:01:58.280 00:35:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:01:58.280 00:35:50 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:01:58.280 00:35:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:01:58.280 00:35:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:01:58.280 00:35:50 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:01:58.280 00:35:50 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme2n1 00:01:58.280 00:35:50 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:01:58.280 00:35:50 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:01:58.280 00:35:50 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:01:58.280 00:35:50 -- setup/acl.sh@12 -- # devs=() 00:01:58.280 00:35:50 -- setup/acl.sh@12 -- # declare -a devs 00:01:58.280 00:35:50 -- setup/acl.sh@13 -- # drivers=() 00:01:58.280 00:35:50 -- setup/acl.sh@13 -- # declare -A drivers 00:01:58.280 00:35:50 -- setup/acl.sh@51 -- # setup reset 00:01:58.280 00:35:50 -- setup/common.sh@9 -- # [[ reset == output ]] 00:01:58.280 00:35:50 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:01.579 00:35:53 -- setup/acl.sh@52 -- # collect_setup_devs 00:02:01.579 00:35:53 -- setup/acl.sh@16 -- # local dev driver 00:02:01.579 00:35:53 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:01.579 00:35:53 -- setup/acl.sh@15 -- # setup output status 00:02:01.579 00:35:53 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:01.579 00:35:53 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:02:04.123 Hugepages 00:02:04.123 node hugesize free / total 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:02:04.123 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # continue 00:02:04.123 00:35:56 -- 
setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:6a:01.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:6a:02.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:6f:01.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:6f:02.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:74:01.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:74:02.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:79:01.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:79:02.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.123 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:c9:00.0 == *:*:*.* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:04.123 00:35:56 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]] 00:02:04.123 00:35:56 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:04.123 00:35:56 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:04.123 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:ca:00.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\a\:\0\0\.\0* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:04.385 00:35:56 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:cb:00.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\c\b\:\0\0\.\0* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@22 -- # devs+=("$dev") 00:02:04.385 00:35:56 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 
0000:e7:01.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:e7:02.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:ec:01.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:ec:02.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:f1:01.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:f1:02.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:f6:01.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@19 -- # [[ 0000:f6:02.0 == *:*:*.* ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # [[ idxd == nvme ]] 00:02:04.385 00:35:56 -- setup/acl.sh@20 -- # continue 00:02:04.385 00:35:56 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:02:04.385 00:35:56 -- setup/acl.sh@24 -- # (( 3 > 0 )) 00:02:04.385 00:35:56 -- setup/acl.sh@54 -- # run_test denied denied 00:02:04.385 00:35:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:04.385 00:35:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:04.385 00:35:56 -- common/autotest_common.sh@10 -- # set +x 00:02:04.647 ************************************ 00:02:04.647 START TEST denied 00:02:04.647 ************************************ 00:02:04.647 00:35:57 -- common/autotest_common.sh@1111 -- # denied 00:02:04.647 00:35:57 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:c9:00.0' 00:02:04.647 00:35:57 -- setup/acl.sh@38 -- # setup output config 00:02:04.647 00:35:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:04.647 00:35:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:04.647 00:35:57 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:c9:00.0' 00:02:09.941 0000:c9:00.0 (8086 0a54): Skipping denied controller at 0000:c9:00.0 00:02:09.941 00:36:02 -- setup/acl.sh@40 -- # verify 0000:c9:00.0 00:02:09.941 00:36:02 -- setup/acl.sh@28 -- # local dev driver 00:02:09.941 00:36:02 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:09.941 00:36:02 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:c9:00.0 ]] 00:02:09.941 00:36:02 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:c9:00.0/driver 
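The denied test starting here blocks one controller via PCI_BLOCKED, reruns setup.sh config, and greps for the "Skipping denied controller" line; verify then confirms the blocked device stayed bound to the kernel nvme driver instead of being handed to vfio-pci or uio. That driver check boils down to resolving the device's driver symlink in sysfs, roughly:

bdf=0000:c9:00.0   # the controller this run blocks via PCI_BLOCKED
driver=$(basename "$(readlink -f "/sys/bus/pci/devices/$bdf/driver")")
[[ $driver == nvme ]] && echo "$bdf is still on the kernel nvme driver"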
00:02:09.941 00:36:02 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:09.941 00:36:02 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:09.941 00:36:02 -- setup/acl.sh@41 -- # setup reset 00:02:09.941 00:36:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:09.941 00:36:02 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:14.163 00:02:14.163 real 0m9.663s 00:02:14.163 user 0m2.121s 00:02:14.163 sys 0m4.163s 00:02:14.163 00:36:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:14.163 00:36:06 -- common/autotest_common.sh@10 -- # set +x 00:02:14.163 ************************************ 00:02:14.163 END TEST denied 00:02:14.163 ************************************ 00:02:14.163 00:36:06 -- setup/acl.sh@55 -- # run_test allowed allowed 00:02:14.163 00:36:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:14.163 00:36:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:14.163 00:36:06 -- common/autotest_common.sh@10 -- # set +x 00:02:14.469 ************************************ 00:02:14.469 START TEST allowed 00:02:14.469 ************************************ 00:02:14.469 00:36:06 -- common/autotest_common.sh@1111 -- # allowed 00:02:14.469 00:36:06 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:c9:00.0 00:02:14.469 00:36:06 -- setup/acl.sh@46 -- # grep -E '0000:c9:00.0 .*: nvme -> .*' 00:02:14.469 00:36:06 -- setup/acl.sh@45 -- # setup output config 00:02:14.469 00:36:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:14.469 00:36:06 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:02:19.767 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:02:19.767 00:36:11 -- setup/acl.sh@47 -- # verify 0000:ca:00.0 0000:cb:00.0 00:02:19.767 00:36:11 -- setup/acl.sh@28 -- # local dev driver 00:02:19.767 00:36:11 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:19.767 00:36:11 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:ca:00.0 ]] 00:02:19.767 00:36:11 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:ca:00.0/driver 00:02:19.767 00:36:11 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:19.767 00:36:11 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:19.767 00:36:11 -- setup/acl.sh@30 -- # for dev in "$@" 00:02:19.767 00:36:11 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:cb:00.0 ]] 00:02:19.767 00:36:11 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:cb:00.0/driver 00:02:19.767 00:36:11 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:02:19.767 00:36:11 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:02:19.767 00:36:11 -- setup/acl.sh@48 -- # setup reset 00:02:19.767 00:36:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:02:19.767 00:36:11 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:02:23.068 00:02:23.068 real 0m8.357s 00:02:23.068 user 0m2.003s 00:02:23.068 sys 0m4.095s 00:02:23.068 00:36:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:23.068 00:36:15 -- common/autotest_common.sh@10 -- # set +x 00:02:23.068 ************************************ 00:02:23.068 END TEST allowed 00:02:23.068 ************************************ 00:02:23.068 00:02:23.068 real 0m24.744s 00:02:23.068 user 0m6.439s 00:02:23.068 sys 0m12.449s 00:02:23.068 00:36:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:23.068 00:36:15 -- common/autotest_common.sh@10 -- # set +x 00:02:23.068 ************************************ 
00:02:23.068 END TEST acl 00:02:23.068 ************************************ 00:02:23.068 00:36:15 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:23.068 00:36:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:23.068 00:36:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:23.068 00:36:15 -- common/autotest_common.sh@10 -- # set +x 00:02:23.068 ************************************ 00:02:23.068 START TEST hugepages 00:02:23.068 ************************************ 00:02:23.068 00:36:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/hugepages.sh 00:02:23.068 * Looking for test storage... 00:02:23.068 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup 00:02:23.068 00:36:15 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:02:23.068 00:36:15 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:02:23.069 00:36:15 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:02:23.069 00:36:15 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:02:23.069 00:36:15 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:02:23.069 00:36:15 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:02:23.069 00:36:15 -- setup/common.sh@17 -- # local get=Hugepagesize 00:02:23.069 00:36:15 -- setup/common.sh@18 -- # local node= 00:02:23.069 00:36:15 -- setup/common.sh@19 -- # local var val 00:02:23.069 00:36:15 -- setup/common.sh@20 -- # local mem_f mem 00:02:23.069 00:36:15 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:23.069 00:36:15 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:23.069 00:36:15 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:23.069 00:36:15 -- setup/common.sh@28 -- # mapfile -t mem 00:02:23.069 00:36:15 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 239710040 kB' 'MemAvailable: 243372896 kB' 'Buffers: 2696 kB' 'Cached: 10578612 kB' 'SwapCached: 0 kB' 'Active: 6647636 kB' 'Inactive: 4387964 kB' 'Active(anon): 6080832 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463568 kB' 'Mapped: 165156 kB' 'Shmem: 5626540 kB' 'KReclaimable: 374280 kB' 'Slab: 988808 kB' 'SReclaimable: 374280 kB' 'SUnreclaim: 614528 kB' 'KernelStack: 24816 kB' 'PageTables: 8092 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 135570668 kB' 'Committed_AS: 7598316 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329200 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 
-- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- 
setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.069 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.069 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # continue 00:02:23.070 00:36:15 -- setup/common.sh@31 -- # IFS=': ' 
00:02:23.070 00:36:15 -- setup/common.sh@31 -- # read -r var val _ 00:02:23.070 00:36:15 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:02:23.070 00:36:15 -- setup/common.sh@33 -- # echo 2048 00:02:23.070 00:36:15 -- setup/common.sh@33 -- # return 0 00:02:23.070 00:36:15 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:02:23.070 00:36:15 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:02:23.070 00:36:15 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:02:23.070 00:36:15 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:02:23.070 00:36:15 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:02:23.070 00:36:15 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:02:23.070 00:36:15 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:02:23.070 00:36:15 -- setup/hugepages.sh@207 -- # get_nodes 00:02:23.070 00:36:15 -- setup/hugepages.sh@27 -- # local node 00:02:23.070 00:36:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:23.070 00:36:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:02:23.070 00:36:15 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:23.070 00:36:15 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:23.070 00:36:15 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:23.070 00:36:15 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:23.070 00:36:15 -- setup/hugepages.sh@208 -- # clear_hp 00:02:23.070 00:36:15 -- setup/hugepages.sh@37 -- # local node hp 00:02:23.070 00:36:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:23.070 00:36:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:23.070 00:36:15 -- setup/hugepages.sh@41 -- # echo 0 00:02:23.070 00:36:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:23.070 00:36:15 -- setup/hugepages.sh@41 -- # echo 0 00:02:23.070 00:36:15 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:02:23.070 00:36:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:23.070 00:36:15 -- setup/hugepages.sh@41 -- # echo 0 00:02:23.070 00:36:15 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:02:23.070 00:36:15 -- setup/hugepages.sh@41 -- # echo 0 00:02:23.070 00:36:15 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:02:23.070 00:36:15 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:02:23.070 00:36:15 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:02:23.070 00:36:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:23.070 00:36:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:23.070 00:36:15 -- common/autotest_common.sh@10 -- # set +x 00:02:23.070 ************************************ 00:02:23.070 START TEST default_setup 00:02:23.070 ************************************ 00:02:23.070 00:36:15 -- common/autotest_common.sh@1111 -- # default_setup 00:02:23.070 00:36:15 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:02:23.070 00:36:15 -- setup/hugepages.sh@49 -- # local size=2097152 00:02:23.070 00:36:15 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:02:23.070 00:36:15 -- setup/hugepages.sh@51 -- # shift 00:02:23.070 00:36:15 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:02:23.070 00:36:15 -- setup/hugepages.sh@52 -- 
# local node_ids 00:02:23.070 00:36:15 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:23.070 00:36:15 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:02:23.070 00:36:15 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:02:23.070 00:36:15 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:02:23.070 00:36:15 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:23.070 00:36:15 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:02:23.070 00:36:15 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:23.070 00:36:15 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:23.070 00:36:15 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:23.070 00:36:15 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:02:23.070 00:36:15 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:02:23.070 00:36:15 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:02:23.070 00:36:15 -- setup/hugepages.sh@73 -- # return 0 00:02:23.070 00:36:15 -- setup/hugepages.sh@137 -- # setup output 00:02:23.070 00:36:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:23.070 00:36:15 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:26.371 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:26.371 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:26.371 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:26.371 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:02:26.371 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:26.371 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:02:26.371 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:26.371 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:02:26.371 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:26.371 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:02:26.371 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:02:26.371 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:02:26.371 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:26.371 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:02:26.371 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:02:26.371 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:02:27.755 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:02:28.016 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:02:28.279 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:02:28.279 00:36:20 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:02:28.279 00:36:20 -- setup/hugepages.sh@89 -- # local node 00:02:28.279 00:36:20 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:28.279 00:36:20 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:28.279 00:36:20 -- setup/hugepages.sh@92 -- # local surp 00:02:28.279 00:36:20 -- setup/hugepages.sh@93 -- # local resv 00:02:28.279 00:36:20 -- setup/hugepages.sh@94 -- # local anon 00:02:28.279 00:36:20 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:28.279 00:36:20 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:28.279 00:36:20 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:28.279 00:36:20 -- setup/common.sh@18 -- # local node= 00:02:28.279 00:36:20 -- setup/common.sh@19 -- # local var val 00:02:28.279 00:36:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:28.279 00:36:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:28.279 00:36:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:28.279 00:36:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:28.279 00:36:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:28.279 00:36:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 
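A rough bash sketch of the per-node sysfs writes behind the clear_hp and nr_hugepages steps traced above (the sysfs paths are the standard kernel ones; the writes need root, and the 1024/node0 values simply mirror this run):

  shopt -s extglob                                    # for the node+([0-9]) glob the script uses
  for node in /sys/devices/system/node/node+([0-9]); do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"                 # clear_hp: release any existing pages
      done
  done
  # default_setup: 1024 x 2048 kB pages on node 0 only (2097152 kB / 2048 kB)
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages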
00:02:28.279 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.279 00:36:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242070592 kB' 'MemAvailable: 245732336 kB' 'Buffers: 2696 kB' 'Cached: 10578864 kB' 'SwapCached: 0 kB' 'Active: 6675984 kB' 'Inactive: 4387964 kB' 'Active(anon): 6109180 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491872 kB' 'Mapped: 165100 kB' 'Shmem: 5626792 kB' 'KReclaimable: 372056 kB' 'Slab: 976676 kB' 'SReclaimable: 372056 kB' 'SUnreclaim: 604620 kB' 'KernelStack: 24800 kB' 'PageTables: 9528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7665812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329184 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.279 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.279 00:36:20 -- setup/common.sh@32 -- # continue 
[xtrace condensed: get_meminfo compared /proc/meminfo keys Active(anon) through WritebackTmp against AnonHugePages and skipped each with continue]
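For reference, a short bash sketch of the mapfile/strip pair that feeds this loop (illustrative; node=0 is just an example value, while this run uses the global file): per-node meminfo lines are prefixed with "Node N ", so stripping that prefix lets the same parser handle both the global and per-node files.

  shopt -s extglob
  node=0                                               # example node id
  mem_f=/sys/devices/system/node/node$node/meminfo
  [[ -e $mem_f ]] || mem_f=/proc/meminfo               # fall back to the global file
  mapfile -t mem < "$mem_f"                            # capture one line per array element
  mem=("${mem[@]#Node +([0-9]) }")                     # drop the "Node N " prefix, if any
  printf '%s\n' "${mem[@]}" | head -3                  # same stream the [[ ... ]] loop reads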
00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:28.280 00:36:20 -- setup/common.sh@33 -- # echo 0 00:02:28.280 00:36:20 -- setup/common.sh@33 -- # return 0 00:02:28.280 00:36:20 -- setup/hugepages.sh@97 -- # anon=0 00:02:28.280 00:36:20 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:28.280 00:36:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:28.280 00:36:20 -- setup/common.sh@18 -- # local node= 00:02:28.280 00:36:20 -- setup/common.sh@19 -- # local var val 00:02:28.280 00:36:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:28.280 00:36:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:28.280 00:36:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:28.280 00:36:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:28.280 00:36:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:28.280 00:36:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242069584 kB' 'MemAvailable: 245731328 kB' 'Buffers: 2696 kB' 'Cached: 10578864 kB' 'SwapCached: 0 kB' 'Active: 6676256 kB' 'Inactive: 4387964 kB' 'Active(anon): 6109452 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 
kB' 'Writeback: 0 kB' 'AnonPages: 492648 kB' 'Mapped: 165100 kB' 'Shmem: 5626792 kB' 'KReclaimable: 372056 kB' 'Slab: 976676 kB' 'SReclaimable: 372056 kB' 'SUnreclaim: 604620 kB' 'KernelStack: 24800 kB' 'PageTables: 9548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7664308 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329104 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.280 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.280 00:36:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.281 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.281 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.281 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.281 00:36:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.281 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.281 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.281 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.281 00:36:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.281 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.281 
00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.281 00:36:20 -- setup/common.sh@31 -- # read -r var val _
[xtrace condensed: get_meminfo compared /proc/meminfo keys Active(file) through CmaFree against HugePages_Surp and skipped each with continue]
00:02:28.545 00:36:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.545 00:36:20
-- setup/common.sh@32 -- # continue 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:28.545 00:36:20 -- setup/common.sh@33 -- # echo 0 00:02:28.545 00:36:20 -- setup/common.sh@33 -- # return 0 00:02:28.545 00:36:20 -- setup/hugepages.sh@99 -- # surp=0 00:02:28.545 00:36:20 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:28.545 00:36:20 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:28.545 00:36:20 -- setup/common.sh@18 -- # local node= 00:02:28.545 00:36:20 -- setup/common.sh@19 -- # local var val 00:02:28.545 00:36:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:28.545 00:36:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:28.545 00:36:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:28.545 00:36:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:28.545 00:36:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:28.545 00:36:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.545 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.545 00:36:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242069796 kB' 'MemAvailable: 245731540 kB' 'Buffers: 2696 kB' 'Cached: 10578864 kB' 'SwapCached: 0 kB' 'Active: 6676812 kB' 'Inactive: 4387964 kB' 'Active(anon): 6110008 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492700 kB' 'Mapped: 165100 kB' 'Shmem: 5626792 kB' 'KReclaimable: 372056 kB' 'Slab: 976736 kB' 'SReclaimable: 372056 kB' 'SUnreclaim: 604680 kB' 'KernelStack: 24896 kB' 'PageTables: 9592 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7665840 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329216 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # [[ MemTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.545 00:36:20 -- setup/common.sh@32 -- # continue
[xtrace condensed: get_meminfo compared /proc/meminfo keys MemFree through VmallocChunk against HugePages_Rsvd and skipped each with continue]
00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ Percpu
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # continue 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.546 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.546 00:36:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:28.546 00:36:20 -- setup/common.sh@33 -- # echo 0 00:02:28.546 00:36:20 -- setup/common.sh@33 -- # return 0 
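With all three lookups now returned (anon=0, surp=0, resv=0), the verify step that follows reduces to plain arithmetic; a tiny bash sketch of the hugepages.sh@107/@109 checks, with the values taken from this run:

  nr_hugepages=1024 anon=0 surp=0 resv=0
  (( 1024 == nr_hugepages + surp + resv )) && echo 'all 1024 pages accounted for'
  (( 1024 == nr_hugepages ))               && echo 'default_setup allocated the expected count'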
00:02:28.547 00:36:20 -- setup/hugepages.sh@100 -- # resv=0 00:02:28.547 00:36:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:28.547 nr_hugepages=1024 00:02:28.547 00:36:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:28.547 resv_hugepages=0 00:02:28.547 00:36:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:28.547 surplus_hugepages=0 00:02:28.547 00:36:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:28.547 anon_hugepages=0 00:02:28.547 00:36:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:28.547 00:36:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:28.547 00:36:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:28.547 00:36:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:28.547 00:36:20 -- setup/common.sh@18 -- # local node= 00:02:28.547 00:36:20 -- setup/common.sh@19 -- # local var val 00:02:28.547 00:36:20 -- setup/common.sh@20 -- # local mem_f mem 00:02:28.547 00:36:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:28.547 00:36:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:28.547 00:36:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:28.547 00:36:20 -- setup/common.sh@28 -- # mapfile -t mem 00:02:28.547 00:36:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:28.547 00:36:20 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.547 00:36:20 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.547 00:36:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242068160 kB' 'MemAvailable: 245729904 kB' 'Buffers: 2696 kB' 'Cached: 10578868 kB' 'SwapCached: 0 kB' 'Active: 6676520 kB' 'Inactive: 4387964 kB' 'Active(anon): 6109716 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 492468 kB' 'Mapped: 165108 kB' 'Shmem: 5626796 kB' 'KReclaimable: 372056 kB' 'Slab: 976736 kB' 'SReclaimable: 372056 kB' 'SUnreclaim: 604680 kB' 'KernelStack: 24768 kB' 'PageTables: 9256 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7664836 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329120 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:28.547 00:36:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.547 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.547 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.547 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.547 00:36:21 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.547 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.547 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.547 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.547 00:36:21 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.547 00:36:21 -- setup/common.sh@32 -- # 
continue
[xtrace condensed: get_meminfo compared /proc/meminfo keys Buffers through HardwareCorrupted against HugePages_Total and skipped each with continue; the trace resumes below]
00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ AnonHugePages ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': ' 00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _ 00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:28.548 00:36:21 -- setup/common.sh@33 -- # echo 1024 00:02:28.548 00:36:21 -- setup/common.sh@33 -- # return 0 00:02:28.548 00:36:21 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:28.548 00:36:21 -- setup/hugepages.sh@112 -- # get_nodes 00:02:28.548 00:36:21 -- setup/hugepages.sh@27 -- # local node 00:02:28.548 00:36:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:28.548 00:36:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:28.548 00:36:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:28.548 00:36:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:28.548 00:36:21 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:28.548 00:36:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:28.548 00:36:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:28.548 00:36:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:28.548 00:36:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:28.548 00:36:21 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:28.548 00:36:21 -- setup/common.sh@18 -- # local node=0 00:02:28.548 
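The field-by-field scan condensed above is the whole of the get_meminfo algorithm this log keeps tracing: snapshot the meminfo file into an array, strip any 'Node <n> ' prefix, then compare each field name against the requested key and print the first matching value. A minimal standalone sketch of that logic, for orientation only (our own re-implementation; the name get_meminfo_sketch is hypothetical, not the SPDK helper):

shopt -s extglob                      # the +([0-9]) pattern below needs extended globbing
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # A per-node query reads that node's own meminfo file instead.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] && mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long runs of 'continue' in the trace
        echo "$val"
        return 0
    done
    return 1
}
get_meminfo_sketch HugePages_Total    # would print 1024 on the machine traced here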
00:02:28.548 00:36:21 -- setup/hugepages.sh@112 -- # get_nodes
00:02:28.548 00:36:21 -- setup/hugepages.sh@27 -- # local node
00:02:28.548 00:36:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:28.548 00:36:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:28.548 00:36:21 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:28.548 00:36:21 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:28.548 00:36:21 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:28.548 00:36:21 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:28.548 00:36:21 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:28.548 00:36:21 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:28.548 00:36:21 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:28.548 00:36:21 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:28.548 00:36:21 -- setup/common.sh@18 -- # local node=0
00:02:28.548 00:36:21 -- setup/common.sh@19 -- # local var val
00:02:28.548 00:36:21 -- setup/common.sh@20 -- # local mem_f mem
00:02:28.548 00:36:21 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:28.548 00:36:21 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:28.548 00:36:21 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:28.548 00:36:21 -- setup/common.sh@28 -- # mapfile -t mem
00:02:28.548 00:36:21 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': '
00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _
00:02:28.548 00:36:21 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816224 kB' 'MemFree: 125361136 kB' 'MemUsed: 6455088 kB' 'SwapCached: 0 kB' 'Active: 2203880 kB' 'Inactive: 118148 kB' 'Active(anon): 1803252 kB' 'Inactive(anon): 0 kB' 'Active(file): 400628 kB' 'Inactive(file): 118148 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2042988 kB' 'Mapped: 127592 kB' 'AnonPages: 288232 kB' 'Shmem: 1524212 kB' 'KernelStack: 13096 kB' 'PageTables: 6672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158684 kB' 'Slab: 480120 kB' 'SReclaimable: 158684 kB' 'SUnreclaim: 321436 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:28.548 00:36:21 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:28.548 00:36:21 -- setup/common.sh@32 -- # continue
00:02:28.548 00:36:21 -- setup/common.sh@31 -- # IFS=': '
00:02:28.548 00:36:21 -- setup/common.sh@31 -- # read -r var val _
...
00:02:28.549 00:36:21 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:28.549 00:36:21 -- setup/common.sh@33 -- # echo 0
00:02:28.549 00:36:21 -- setup/common.sh@33 -- # return 0
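Note the branch at @23/@24 just above: because node=0, the value came from /sys/devices/system/node/node0/meminfo rather than /proc/meminfo, and every line in a per-node file is prefixed with 'Node 0 ', which the @29 expansion strips before the scan. A hypothetical one-liner equivalent of the query that just returned 0:

sed -E 's/^Node [0-9]+ //' /sys/devices/system/node/node0/meminfo | awk -F': +' '$1 == "HugePages_Surp" { print $2; exit }'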
00:02:28.549 00:36:21 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:28.549 00:36:21 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:28.549 00:36:21 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:28.549 00:36:21 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:28.549 00:36:21 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:28.549 node0=1024 expecting 1024
00:02:28.549 00:36:21 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:28.549 
00:02:28.549 real 0m5.434s
00:02:28.549 user 0m1.105s
00:02:28.549 sys 0m2.059s
00:02:28.549 00:36:21 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:28.549 00:36:21 -- common/autotest_common.sh@10 -- # set +x
00:02:28.549 ************************************
00:02:28.549 END TEST default_setup
00:02:28.549 ************************************
00:02:28.549 00:36:21 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:02:28.549 00:36:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:28.549 00:36:21 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:28.549 00:36:21 -- common/autotest_common.sh@10 -- # set +x
00:02:28.549 ************************************
00:02:28.549 START TEST per_node_1G_alloc
00:02:28.549 ************************************
00:02:28.549 00:36:21 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc
00:02:28.549 00:36:21 -- setup/hugepages.sh@143 -- # local IFS=,
00:02:28.549 00:36:21 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:02:28.549 00:36:21 -- setup/hugepages.sh@49 -- # local size=1048576
00:02:28.549 00:36:21 -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:02:28.549 00:36:21 -- setup/hugepages.sh@51 -- # shift
00:02:28.549 00:36:21 -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:02:28.549 00:36:21 -- setup/hugepages.sh@52 -- # local node_ids
00:02:28.549 00:36:21 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:28.549 00:36:21 -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:02:28.549 00:36:21 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
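get_test_nr_hugepages converts the requested per-node allocation (1048576 kB, i.e. 1 GiB) into default-sized 2048 kB hugepages before handing the count to each listed node; the arithmetic behind nr_hugepages=512 above, spelled out:

echo $(( 1048576 / 2048 ))   # 512 hugepages per node (1 GiB at 2 MiB per page)
echo $(( 512 * 2 ))          # 1024 pages total once nodes 0 and 1 are both populated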
00:02:28.549 00:36:21 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:02:28.549 00:36:21 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:28.549 00:36:21 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:02:28.549 00:36:21 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:28.549 00:36:21 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:28.549 00:36:21 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:28.549 00:36:21 -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:02:28.549 00:36:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:28.549 00:36:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:28.549 00:36:21 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:28.549 00:36:21 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:02:28.549 00:36:21 -- setup/hugepages.sh@73 -- # return 0
00:02:28.549 00:36:21 -- setup/hugepages.sh@146 -- # NRHUGE=512
00:02:28.549 00:36:21 -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:02:28.549 00:36:21 -- setup/hugepages.sh@146 -- # setup output
00:02:28.549 00:36:21 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:28.549 00:36:21 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:31.098 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:31.098 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:31.098 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:31.098 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:31.098 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:31.098 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:31.098 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:31.098 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:31.098 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:31.098 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:31.098 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:31.098 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:31.098 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:31.098 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:31.361 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:31.361 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:31.361 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:31.361 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:31.361 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
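At @146 the test hands that per-node plan to the SPDK setup script through environment variables, NRHUGE being the page count per node and HUGENODE the comma-separated node list, which is why nr_hugepages comes back as 1024 at @147 below. Reproducing the invocation traced above by hand (assuming root privileges; path as in this workspace):

sudo NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh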
00:02:31.361 00:36:23 -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:02:31.361 00:36:23 -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:02:31.361 00:36:23 -- setup/hugepages.sh@89 -- # local node
00:02:31.361 00:36:23 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:31.361 00:36:23 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:31.361 00:36:23 -- setup/hugepages.sh@92 -- # local surp
00:02:31.361 00:36:23 -- setup/hugepages.sh@93 -- # local resv
00:02:31.361 00:36:23 -- setup/hugepages.sh@94 -- # local anon
00:02:31.361 00:36:23 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:31.361 00:36:23 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:31.361 00:36:23 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:31.361 00:36:23 -- setup/common.sh@18 -- # local node=
00:02:31.361 00:36:23 -- setup/common.sh@19 -- # local var val
00:02:31.361 00:36:23 -- setup/common.sh@20 -- # local mem_f mem
00:02:31.361 00:36:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:31.361 00:36:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:31.361 00:36:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:31.361 00:36:23 -- setup/common.sh@28 -- # mapfile -t mem
00:02:31.361 00:36:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:31.361 00:36:23 -- setup/common.sh@31 -- # IFS=': '
00:02:31.361 00:36:23 -- setup/common.sh@31 -- # read -r var val _
00:02:31.361 00:36:23 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242044800 kB' 'MemAvailable: 245706544 kB' 'Buffers: 2696 kB' 'Cached: 10578976 kB' 'SwapCached: 0 kB' 'Active: 6681076 kB' 'Inactive: 4387964 kB' 'Active(anon): 6114272 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496712 kB' 'Mapped: 166076 kB' 'Shmem: 5626904 kB' 'KReclaimable: 372056 kB' 'Slab: 977360 kB' 'SReclaimable: 372056 kB' 'SUnreclaim: 605304 kB' 'KernelStack: 24768 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7674668 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329156 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
00:02:31.361 00:36:23 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:31.361 00:36:23 -- setup/common.sh@32 -- # continue
00:02:31.361 00:36:23 -- setup/common.sh@31 -- # IFS=': '
00:02:31.361 00:36:23 -- setup/common.sh@31 -- # read -r var val _
...
00:02:31.362 00:36:23 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:31.362 00:36:23 -- setup/common.sh@33 -- # echo 0
00:02:31.362 00:36:23 -- setup/common.sh@33 -- # return 0
00:02:31.362 00:36:23 -- setup/hugepages.sh@97 -- # anon=0
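verify_nr_hugepages is now collecting the terms of the accounting identity the suite checks (compare @107 earlier in this log): HugePages_Total must equal the configured page count plus surplus plus reserved pages, with anon at 0 since transparent hugepages are madvise-only here. The same consistency check as a hypothetical standalone snippet:

total=$(awk '/^HugePages_Total/ {print $2}' /proc/meminfo)
surp=$(awk '/^HugePages_Surp/ {print $2}' /proc/meminfo)
resv=$(awk '/^HugePages_Rsvd/ {print $2}' /proc/meminfo)
(( total == 1024 + surp + resv )) && echo 'hugepage accounting consistent'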
00:02:31.362 00:36:23 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:31.362 00:36:23 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:31.362 00:36:23 -- setup/common.sh@18 -- # local node=
00:02:31.362 00:36:23 -- setup/common.sh@19 -- # local var val
00:02:31.362 00:36:23 -- setup/common.sh@20 -- # local mem_f mem
00:02:31.362 00:36:23 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:31.362 00:36:23 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:31.363 00:36:23 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:31.363 00:36:23 -- setup/common.sh@28 -- # mapfile -t mem
00:02:31.363 00:36:23 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:31.363 00:36:23 -- setup/common.sh@31 -- # IFS=': '
00:02:31.363 00:36:23 -- setup/common.sh@31 -- # read -r var val _
00:02:31.363 00:36:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242054756 kB' 'MemAvailable: 245716500 kB' 'Buffers: 2696 kB' 'Cached: 10578980 kB' 'SwapCached: 0 kB' 'Active: 6680948 kB' 'Inactive: 4387964 kB' 'Active(anon): 6114144 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496536 kB' 'Mapped: 166056 kB' 'Shmem: 5626908 kB' 'KReclaimable: 372056 kB' 'Slab: 977344 kB' 'SReclaimable: 372056 kB' 'SUnreclaim: 605288 kB' 'KernelStack: 24880 kB' 'PageTables: 9640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7674680 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329204 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
00:02:31.363 00:36:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:31.363 00:36:24 -- setup/common.sh@32 -- # continue
00:02:31.363 00:36:24 -- setup/common.sh@31 -- # IFS=': '
00:02:31.363 00:36:24 -- setup/common.sh@31 -- # read -r var val _
...
00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:31.364 00:36:24 -- setup/common.sh@33 -- # echo 0
00:02:31.364 00:36:24 -- setup/common.sh@33 -- # return 0
00:02:31.364 00:36:24 -- setup/hugepages.sh@99 -- # surp=0
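Each of these queries rescans the whole file for a single counter; when debugging by hand, all hugepage counters can be pulled in one pass instead (an illustrative alternative, not what the SPDK scripts do):

awk -F': *' '/^(HugePages_|AnonHugePages|Hugepagesize)/ { print $1 "=" $2 }' /proc/meminfo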
2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.364 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.364 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 
00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 
00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.365 00:36:24 -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:31.365 00:36:24 -- setup/common.sh@33 -- # echo 0 00:02:31.365 00:36:24 -- setup/common.sh@33 -- # return 0 00:02:31.365 00:36:24 -- setup/hugepages.sh@100 -- # resv=0 00:02:31.365 00:36:24 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:31.365 nr_hugepages=1024 00:02:31.365 00:36:24 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:31.365 resv_hugepages=0 00:02:31.365 00:36:24 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:31.365 surplus_hugepages=0 00:02:31.365 00:36:24 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:31.365 anon_hugepages=0 00:02:31.365 00:36:24 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:31.365 00:36:24 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:31.365 00:36:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:31.365 00:36:24 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:31.365 00:36:24 -- setup/common.sh@18 -- # local node= 00:02:31.365 00:36:24 -- setup/common.sh@19 -- # local var val 00:02:31.365 00:36:24 -- setup/common.sh@20 -- # local mem_f mem 00:02:31.365 00:36:24 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:31.365 00:36:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:31.365 00:36:24 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:31.365 00:36:24 -- setup/common.sh@28 -- # mapfile -t mem 00:02:31.365 00:36:24 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.365 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.366 00:36:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242057292 kB' 'MemAvailable: 245719036 kB' 'Buffers: 2696 kB' 'Cached: 10579008 kB' 'SwapCached: 0 kB' 'Active: 6675724 kB' 'Inactive: 4387964 kB' 'Active(anon): 6108920 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491088 kB' 'Mapped: 165132 kB' 'Shmem: 5626936 kB' 'KReclaimable: 372056 kB' 'Slab: 977240 kB' 'SReclaimable: 372056 kB' 'SUnreclaim: 605184 kB' 'KernelStack: 24928 kB' 'PageTables: 9600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7666848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329216 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:31.366 00:36:24 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.366 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.366 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.366 00:36:24 -- setup/common.sh@31 -- # read -r var val _ 00:02:31.366 00:36:24 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:31.366 00:36:24 -- setup/common.sh@32 -- # continue 00:02:31.366 00:36:24 -- setup/common.sh@31 -- # IFS=': ' 00:02:31.366 00:36:24 -- setup/common.sh@31 -- # 
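[note: the xtrace above shows the whole of setup/common.sh's get_meminfo pattern: mapfile the meminfo snapshot, then read key/value pairs with IFS=': ' and echo the value once the requested key matches. A minimal standalone sketch of that pattern, assuming the "Key:   value kB" layout seen in the snapshots; the helper below is an illustration, not the SPDK script itself:]

    # Sketch: look up one key in /proc/meminfo, mirroring the
    # IFS=': ' read -r var val _  loop traced above.
    get_meminfo() {
        local get="$1" var val _
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"    # e.g. "0" for HugePages_Rsvd in this run
                return 0
            fi
        done < /proc/meminfo
        return 1               # key not present
    }

[usage: get_meminfo HugePages_Rsvd should print 0 on the host traced here.]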
00:02:31.365 00:36:24 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[xtrace condensed: setup/common.sh@17-31 get_meminfo preamble as above, get=HugePages_Total, mem_f=/proc/meminfo]
00:02:31.366 00:36:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242057292 kB' 'MemAvailable: 245719036 kB' 'Buffers: 2696 kB' 'Cached: 10579008 kB' 'SwapCached: 0 kB' 'Active: 6675724 kB' 'Inactive: 4387964 kB' 'Active(anon): 6108920 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 491088 kB' 'Mapped: 165132 kB' 'Shmem: 5626936 kB' 'KReclaimable: 372056 kB' 'Slab: 977240 kB' 'SReclaimable: 372056 kB' 'SUnreclaim: 605184 kB' 'KernelStack: 24928 kB' 'PageTables: 9600 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7666848 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329216 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
[xtrace condensed: setup/common.sh@32 continues past every snapshot key that is not HugePages_Total]
00:02:31.367 00:36:24 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:31.367 00:36:24 -- setup/common.sh@33 -- # echo 1024
00:02:31.367 00:36:24 -- setup/common.sh@33 -- # return 0
00:02:31.367 00:36:24 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:31.367 00:36:24 -- setup/hugepages.sh@112 -- # get_nodes
00:02:31.367 00:36:24 -- setup/hugepages.sh@27 -- # local node
00:02:31.367 00:36:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:31.367 00:36:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:31.367 00:36:24 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:31.367 00:36:24 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:31.367 00:36:24 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:31.367 00:36:24 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:31.367 00:36:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:31.367 00:36:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
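[note: the checks at setup/hugepages.sh@107 and @110 above assert the kernel's hugepage accounting identity: the configured count must equal HugePages_Total once surplus and reserved pages are folded in. A sketch of the same arithmetic, assuming a get_meminfo helper like the one sketched earlier:]

    nr_hugepages=1024                     # value the test configured
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    # Mirrors the traced test: (( 1024 == nr_hugepages + surp + resv ))
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "hugepage accounting mismatch" >&2
    fi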
00:02:31.367 00:36:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:31.367 00:36:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:31.367 00:36:24 -- setup/common.sh@18 -- # local node=0
00:02:31.367 00:36:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:31.367 00:36:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
[xtrace condensed: remaining setup/common.sh@19-31 get_meminfo preamble as above]
00:02:31.630 00:36:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816224 kB' 'MemFree: 126416164 kB' 'MemUsed: 5400060 kB' 'SwapCached: 0 kB' 'Active: 2202940 kB' 'Inactive: 118148 kB' 'Active(anon): 1802312 kB' 'Inactive(anon): 0 kB' 'Active(file): 400628 kB' 'Inactive(file): 118148 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2043128 kB' 'Mapped: 127628 kB' 'AnonPages: 287040 kB' 'Shmem: 1524352 kB' 'KernelStack: 12984 kB' 'PageTables: 6328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158684 kB' 'Slab: 479716 kB' 'SReclaimable: 158684 kB' 'SUnreclaim: 321032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 continues past every node0 key that is not HugePages_Surp]
00:02:31.631 00:36:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:31.631 00:36:24 -- setup/common.sh@33 -- # echo 0
00:02:31.631 00:36:24 -- setup/common.sh@33 -- # return 0
00:02:31.631 00:36:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:31.631 00:36:24 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:31.631 00:36:24 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:31.631 00:36:24 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:31.631 00:36:24 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:31.631 00:36:24 -- setup/common.sh@18 -- # local node=1
00:02:31.631 00:36:24 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:31.631 00:36:24 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
[xtrace condensed: remaining setup/common.sh@19-31 get_meminfo preamble as above]
00:02:31.631 00:36:24 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742212 kB' 'MemFree: 115641380 kB' 'MemUsed: 11100832 kB' 'SwapCached: 0 kB' 'Active: 4472612 kB' 'Inactive: 4269816 kB' 'Active(anon): 4306436 kB' 'Inactive(anon): 0 kB' 'Active(file): 166176 kB' 'Inactive(file): 4269816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8538588 kB' 'Mapped: 37504 kB' 'AnonPages: 203860 kB' 'Shmem: 4102596 kB' 'KernelStack: 11912 kB' 'PageTables: 2908 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 213372 kB' 'Slab: 497524 kB' 'SReclaimable: 213372 kB' 'SUnreclaim: 284152 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: setup/common.sh@32 continues past every node1 key that is not HugePages_Surp]
00:02:31.632 00:36:24 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:31.632 00:36:24 -- setup/common.sh@33 -- # echo 0
00:02:31.632 00:36:24 -- setup/common.sh@33 -- # return 0
00:02:31.632 00:36:24 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:31.632 00:36:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:31.632 00:36:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:31.632 00:36:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:31.632 00:36:24 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:31.632 node0=512 expecting 512
00:02:31.632 00:36:24 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:31.632 00:36:24 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:31.632 00:36:24 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:31.632 00:36:24 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:31.632 node1=512 expecting 512
00:02:31.632 00:36:24 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:31.632 
00:02:31.632 real 0m2.933s
00:02:31.632 user 0m0.971s
00:02:31.632 sys 0m1.815s
00:02:31.632 00:36:24 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:31.632 00:36:24 -- common/autotest_common.sh@10 -- # set +x
00:02:31.632 ************************************
00:02:31.632 END TEST per_node_1G_alloc
00:02:31.632 ************************************
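[note: per_node_1G_alloc finishes by comparing each node's page count against the expected even split, 512 pages on each of the two nodes, echoed above as "node0=512 expecting 512" / "node1=512 expecting 512". A sketch of that per-node walk, assuming the sysfs layout the trace reads; per-node meminfo lines carry a "Node N " prefix, which setup/common.sh@29 strips the same way:]

    shopt -s extglob
    expected=512
    for node_dir in /sys/devices/system/node/node+([0-9]); do
        node=${node_dir##*node}
        # Strip the "Node N " prefix, then pull HugePages_Total.
        total=$(sed 's/^Node [0-9]* //' "$node_dir/meminfo" \
                | awk -F': +' '$1 == "HugePages_Total" {print $2}')
        echo "node$node=$total expecting $expected"
    done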
00:02:34.939 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:34.939 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:34.939 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:34.939 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:34.939 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:34.939 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:34.939 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:34.939 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:34.939 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:34.939 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:34.939 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:34.939 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:34.939 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:34.939 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:34.939 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:34.939 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:34.939 00:36:27 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:02:34.939 00:36:27 -- setup/hugepages.sh@89 -- # local node 00:02:34.939 00:36:27 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:34.939 00:36:27 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:34.939 00:36:27 -- setup/hugepages.sh@92 -- # local surp 00:02:34.939 00:36:27 -- setup/hugepages.sh@93 -- # local resv 00:02:34.939 00:36:27 -- setup/hugepages.sh@94 -- # local anon 00:02:34.939 00:36:27 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:34.939 00:36:27 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:34.939 00:36:27 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:34.939 00:36:27 -- setup/common.sh@18 -- # local node= 00:02:34.939 00:36:27 -- setup/common.sh@19 -- # local var val 00:02:34.939 00:36:27 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.939 00:36:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.939 00:36:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.939 00:36:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.939 00:36:27 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.939 00:36:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.939 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.939 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.940 00:36:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242082956 kB' 'MemAvailable: 245744692 kB' 'Buffers: 2696 kB' 'Cached: 10579112 kB' 'SwapCached: 0 kB' 'Active: 6665952 kB' 'Inactive: 4387964 kB' 'Active(anon): 6099148 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481412 kB' 'Mapped: 164232 kB' 'Shmem: 5627040 kB' 'KReclaimable: 372040 kB' 'Slab: 976260 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604220 kB' 'KernelStack: 24640 kB' 'PageTables: 8324 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7609728 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328992 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:34.940 00:36:27 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.940 00:36:27 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / continue xtrace elided for every /proc/meminfo key from MemFree through HardwareCorrupted ...] 00:02:34.941 00:36:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:34.941 00:36:27 -- setup/common.sh@33 -- # echo 0 00:02:34.941 00:36:27 -- setup/common.sh@33 -- # return 0 00:02:34.941 00:36:27 -- setup/hugepages.sh@97 -- # anon=0 
00:02:34.941 00:36:27 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:34.941 00:36:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.941 00:36:27 -- setup/common.sh@18 -- # local node= 00:02:34.941 00:36:27 -- setup/common.sh@19 -- # local var val 00:02:34.941 00:36:27 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.941 00:36:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.941 00:36:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.941 00:36:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.941 00:36:27 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.941 00:36:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.941 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.941 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.941 00:36:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242082788 kB' 'MemAvailable: 245744524 kB' 'Buffers: 2696 kB' 'Cached: 10579116 kB' 'SwapCached: 0 kB' 'Active: 6666252 kB' 'Inactive: 4387964 kB' 'Active(anon): 6099448 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481700 kB' 'Mapped: 164312 kB' 'Shmem: 5627044 kB' 'KReclaimable: 372040 kB' 'Slab: 976276 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604236 kB' 'KernelStack: 24592 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7609740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329024 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 
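
The four get_meminfo calls traced above all follow the same pattern: snapshot the meminfo file with mapfile, then scan it one 'key: value' line at a time until the requested key matches, echoing the value. A minimal sketch of that helper in bash, reconstructed from the xtrace alone (the real setup/common.sh may structure the read loop differently; the extglob strip only matters for per-node files):

#!/usr/bin/env bash
shopt -s extglob   # needed for the "Node <n> " prefix strip below

get_meminfo() {
	local get=$1 node=${2:-}
	local var val
	local mem_f mem
	mem_f=/proc/meminfo
	# A per-node query reads the node-specific file instead.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	mapfile -t mem < "$mem_f"
	# node<N>/meminfo prefixes every line with "Node <n> "; strip it.
	mem=("${mem[@]#Node +([0-9]) }")
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue   # skip keys until the requested one
		echo "$val"                        # unit ("kB") falls into _ and is dropped
		return 0
	done < <(printf '%s\n' "${mem[@]}")
	return 1
}

Against the snapshot above, get_meminfo HugePages_Total would print 1024, and get_meminfo HugePages_Surp 0 would read node0's file instead of /proc/meminfo.
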
00:02:34.941 00:36:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.941 00:36:27 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / continue xtrace elided for every /proc/meminfo key from MemFree through HugePages_Rsvd ...] 00:02:34.942 00:36:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.942 00:36:27 -- setup/common.sh@33 -- # echo 0 00:02:34.942 00:36:27 -- setup/common.sh@33 -- # return 0 00:02:34.942 00:36:27 -- setup/hugepages.sh@99 -- # surp=0 
00:02:34.942 00:36:27 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:34.942 00:36:27 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:34.942 00:36:27 -- setup/common.sh@18 -- # local node= 00:02:34.942 00:36:27 -- setup/common.sh@19 -- # local var val 00:02:34.942 00:36:27 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.942 00:36:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.942 00:36:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.942 00:36:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.942 00:36:27 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.942 00:36:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.942 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.942 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.942 00:36:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242085540 kB' 'MemAvailable: 245747276 kB' 'Buffers: 2696 kB' 'Cached: 10579116 kB' 'SwapCached: 0 kB' 'Active: 6666212 kB' 'Inactive: 4387964 kB' 'Active(anon): 6099408 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481564 kB' 'Mapped: 164304 kB' 'Shmem: 5627044 kB' 'KReclaimable: 372040 kB' 'Slab: 976212 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604172 kB' 'KernelStack: 24736 kB' 'PageTables: 8808 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7609516 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329056 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 
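
At this point the verifier has anon=0 and surp=0 and is about to read HugePages_Rsvd and HugePages_Total from the same snapshot. The check it is building toward is plain arithmetic over those fields; a sketch of the accounting in bash, assuming the get_meminfo sketch above (the nr_hugepages=1024 figure comes from the even_2G_alloc setup earlier in this log):

nr_hugepages=1024                       # 2097152 kB request / 2048 kB per page
surp=$(get_meminfo HugePages_Surp)      # 0 in the snapshots above
resv=$(get_meminfo HugePages_Rsvd)      # 0 in the snapshots above
total=$(get_meminfo HugePages_Total)    # 1024 in the snapshots above
# The pool is healthy only if every allocated page is accounted for.
(( total == nr_hugepages + surp + resv )) || echo "hugepage pool drifted"
# Cross-check against the reported pool size:
# 1024 pages * 2048 kB = 2097152 kB, exactly the 'Hugetlb' figure above.
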
00:02:34.942 00:36:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.942 00:36:27 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / continue xtrace elided for every /proc/meminfo key from MemFree through HugePages_Free ...] 00:02:34.943 00:36:27 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:34.943 00:36:27 -- setup/common.sh@33 -- # echo 0 00:02:34.943 00:36:27 -- setup/common.sh@33 -- # return 0 00:02:34.943 00:36:27 -- setup/hugepages.sh@100 -- # resv=0 
00:02:34.943 00:36:27 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:34.943 nr_hugepages=1024 00:36:27 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:34.943 resv_hugepages=0 00:36:27 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:34.943 surplus_hugepages=0 00:36:27 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:34.943 anon_hugepages=0 00:36:27 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:34.943 00:36:27 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:02:34.943 00:36:27 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:34.943 00:36:27 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:34.943 00:36:27 -- setup/common.sh@18 -- # local node= 00:02:34.943 00:36:27 -- setup/common.sh@19 -- # local var val 00:02:34.943 00:36:27 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.943 00:36:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.943 00:36:27 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:34.943 00:36:27 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:34.943 00:36:27 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.943 00:36:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.943 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.943 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.944 00:36:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242084284 kB' 'MemAvailable: 245746020 kB' 'Buffers: 2696 kB' 'Cached: 10579120 kB' 'SwapCached: 0 kB' 'Active: 6665948 kB' 'Inactive: 4387964 kB' 'Active(anon): 6099144 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481328 kB' 'Mapped: 164312 kB' 'Shmem: 5627048 kB' 'KReclaimable: 372040 kB' 'Slab: 976212 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604172 kB' 'KernelStack: 24896 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7609768 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 329040 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 
00:02:34.944 00:36:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.944 00:36:27 -- setup/common.sh@32 -- # continue [... identical IFS=': ' / read / continue xtrace elided for every /proc/meminfo key from MemFree through Unaccepted ...] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:34.945 00:36:27 -- setup/common.sh@33 -- # echo 1024 00:02:34.945 00:36:27 -- setup/common.sh@33 -- # return 0 00:02:34.945 00:36:27 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:34.945 00:36:27 -- setup/hugepages.sh@112 -- # get_nodes 00:02:34.945 00:36:27 -- setup/hugepages.sh@27 -- # local node 00:02:34.945 00:36:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.945 00:36:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:34.945 00:36:27 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:34.945 00:36:27 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:34.945 00:36:27 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:34.945 00:36:27 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:34.945 00:36:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:34.945 00:36:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:34.945 00:36:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:34.945 00:36:27 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:34.945 00:36:27 -- setup/common.sh@18 -- # local node=0 00:02:34.945 00:36:27 -- setup/common.sh@19 -- # local var val 00:02:34.945 00:36:27 -- setup/common.sh@20 -- # local mem_f mem 00:02:34.945 00:36:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:34.945 00:36:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:34.945 00:36:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:34.945 00:36:27 -- setup/common.sh@28 -- # mapfile -t mem 00:02:34.945 00:36:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816224 kB' 'MemFree: 126430392 kB' 'MemUsed: 5385832 kB' 'SwapCached: 0 kB' 'Active: 2195296 kB' 'Inactive: 118148 kB' 'Active(anon): 1794668 kB' 'Inactive(anon): 0 kB' 'Active(file): 400628 kB' 'Inactive(file): 118148 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2043240 kB' 'Mapped: 126700 kB' 'AnonPages: 279332 
kB' 'Shmem: 1524464 kB' 'KernelStack: 12712 kB' 'PageTables: 5132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158668 kB' 'Slab: 479368 kB' 'SReclaimable: 158668 kB' 'SUnreclaim: 320700 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.945 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.945 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 
00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # continue 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': ' 00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _ 00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == 
00:02:34.946 00:36:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.946 00:36:27 -- setup/common.sh@33 -- # echo 0
00:02:34.946 00:36:27 -- setup/common.sh@33 -- # return 0
00:02:34.946 00:36:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:34.946 00:36:27 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:34.946 00:36:27 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:34.946 00:36:27 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:34.946 00:36:27 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:34.946 00:36:27 -- setup/common.sh@18 -- # local node=1
00:02:34.946 00:36:27 -- setup/common.sh@19 -- # local var val
00:02:34.946 00:36:27 -- setup/common.sh@20 -- # local mem_f mem
00:02:34.946 00:36:27 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:34.946 00:36:27 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:34.946 00:36:27 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:34.946 00:36:27 -- setup/common.sh@28 -- # mapfile -t mem
00:02:34.946 00:36:27 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:34.946 00:36:27 -- setup/common.sh@31 -- # IFS=': '
00:02:34.946 00:36:27 -- setup/common.sh@31 -- # read -r var val _
00:02:34.946 00:36:27 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742212 kB' 'MemFree: 115653364 kB' 'MemUsed: 11088848 kB' 'SwapCached: 0 kB' 'Active: 4470484 kB' 'Inactive: 4269816 kB' 'Active(anon): 4304308 kB' 'Inactive(anon): 0 kB' 'Active(file): 166176 kB' 'Inactive(file): 4269816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8538592 kB' 'Mapped: 37512 kB' 'AnonPages: 201736 kB' 'Shmem: 4102600 kB' 'KernelStack: 12072 kB' 'PageTables: 3616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 213372 kB' 'Slab: 496836 kB' 'SReclaimable: 213372 kB' 'SUnreclaim: 283464 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace elided: setup/common.sh@31-32 scanned the node1 meminfo fields (MemTotal through HugePages_Free), tracing continue for every field that did not match HugePages_Surp]
00:02:34.947 00:36:27 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:34.947 00:36:27 -- setup/common.sh@33 -- # echo 0
00:02:34.947 00:36:27 -- setup/common.sh@33 -- # return 0
00:02:34.947 00:36:27 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:34.947 00:36:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:34.947 00:36:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:34.947 00:36:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:34.947 00:36:27 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:34.947 node0=512 expecting 512
00:02:34.947 00:36:27 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:34.947 00:36:27 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:34.947 00:36:27 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:34.947 00:36:27 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:02:34.947 node1=512 expecting 512
00:02:34.947 00:36:27 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:02:34.947
00:02:34.947 real    0m2.988s
00:02:34.947 user    0m0.972s
00:02:34.947 sys     0m1.885s
00:02:34.947 00:36:27 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:34.947 00:36:27 -- common/autotest_common.sh@10 -- # set +x
00:02:34.947 ************************************
00:02:34.947 END TEST even_2G_alloc
00:02:34.947 ************************************
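Every one of the field scans collapsed above is produced by the same helper, get_meminfo() in test/setup/common.sh. The sketch below is reconstructed purely from this xtrace, so the variable names match the trace but the control flow is an approximation of the real helper, not its verbatim contents:

    #!/usr/bin/env bash
    shopt -s extglob # needed for the +([0-9]) pattern used below

    # get_meminfo FIELD [NODE] -- print FIELD's value from /proc/meminfo,
    # or from the per-NUMA-node view when NODE is given and that file exists.
    get_meminfo() {
        local get=$1 node=$2
        local var val line
        local mem_f mem

        mem_f=/proc/meminfo
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it so
        # the same scan loop works for both the global and per-node views.
        mem=("${mem[@]#Node +([0-9]) }")

        # The long runs of "continue" elided in the trace come from this
        # loop: every field is compared against $get until one matches.
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Surp 0 # prints 0 against the node0 snapshot above

Reading /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo is what lets the test compare HugePages_Surp per NUMA node; the "Node N " strip at common.sh@29 is the only adjustment the per-node view needs.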
00:02:34.947 00:36:27 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:02:34.947 00:36:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:34.947 00:36:27 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:34.947 00:36:27 -- common/autotest_common.sh@10 -- # set +x
00:02:34.947 ************************************
00:02:34.947 START TEST odd_alloc
00:02:34.947 ************************************
00:02:34.947 00:36:27 -- common/autotest_common.sh@1111 -- # odd_alloc
00:02:34.948 00:36:27 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:02:34.948 00:36:27 -- setup/hugepages.sh@49 -- # local size=2098176
00:02:34.948 00:36:27 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:34.948 00:36:27 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:34.948 00:36:27 -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:02:34.948 00:36:27 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:34.948 00:36:27 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:34.948 00:36:27 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:34.948 00:36:27 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:02:34.948 00:36:27 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:34.948 00:36:27 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:34.948 00:36:27 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:34.948 00:36:27 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:34.948 00:36:27 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:34.948 00:36:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:34.948 00:36:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:02:34.948 00:36:27 -- setup/hugepages.sh@83 -- # : 513
00:02:34.948 00:36:27 -- setup/hugepages.sh@84 -- # : 1
00:02:34.948 00:36:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:34.948 00:36:27 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:02:34.948 00:36:27 -- setup/hugepages.sh@83 -- # : 0
00:02:34.948 00:36:27 -- setup/hugepages.sh@84 -- # : 0
00:02:34.948 00:36:27 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:34.948 00:36:27 -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:02:34.948 00:36:27 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:02:34.948 00:36:27 -- setup/hugepages.sh@160 -- # setup output
00:02:34.948 00:36:27 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:34.948 00:36:27 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:37.494 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:37.494 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:37.494 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:37.494 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:37.494 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:37.494 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:37.494 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:37.494 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:37.494 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:37.494 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:37.494 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:37.494 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:37.494 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:37.494 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:37.494 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:37.494 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:37.494 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:37.494 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:37.494 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:37.494 00:36:30 -- setup/hugepages.sh@161 -- # verify_nr_hugepages
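The ": 513" / ": 1" lines in the trace above are the tell: get_test_nr_hugepages_per_node walks the nodes from the highest ID down, handing each node remaining/nodes pages, so the odd page out of HUGEMEM=2049 MB (2098176 kB at a 2048 kB hugepage size rounds up to nr_hugepages=1025) lands on node0 as 513 against node1's 512. A sketch of that split under the same assumptions; the helper name here is hypothetical, and the real logic sits at test/setup/hugepages.sh@81-84:

    # split_hugepages_per_node TOTAL NODES -- hypothetical stand-in for the
    # per-node division traced above.
    split_hugepages_per_node() {
        local remaining=$1 nodes=$2
        local -a per_node
        local share

        while (( nodes > 0 )); do
            # Integer division, recomputed each pass: the remainder
            # accumulates on the lower node IDs, which is how 1025 pages
            # become node1=512 and node0=513 in the trace.
            share=$(( remaining / nodes ))
            per_node[nodes - 1]=$share
            remaining=$(( remaining - share ))  # trace: ": 513"
            nodes=$(( nodes - 1 ))              # trace: ": 1"
        done

        echo "${per_node[@]}"
    }

    split_hugepages_per_node 1025 2   # prints: 513 512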
00:02:37.494 00:36:30 -- setup/hugepages.sh@89 -- # local node
00:02:37.494 00:36:30 -- setup/hugepages.sh@90 -- # local sorted_t
00:02:37.494 00:36:30 -- setup/hugepages.sh@91 -- # local sorted_s
00:02:37.494 00:36:30 -- setup/hugepages.sh@92 -- # local surp
00:02:37.494 00:36:30 -- setup/hugepages.sh@93 -- # local resv
00:02:37.494 00:36:30 -- setup/hugepages.sh@94 -- # local anon
00:02:37.494 00:36:30 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:02:37.494 00:36:30 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:02:37.494 00:36:30 -- setup/common.sh@17 -- # local get=AnonHugePages
00:02:37.494 00:36:30 -- setup/common.sh@18 -- # local node=
00:02:37.494 00:36:30 -- setup/common.sh@19 -- # local var val
00:02:37.494 00:36:30 -- setup/common.sh@20 -- # local mem_f mem
00:02:37.494 00:36:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.494 00:36:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.494 00:36:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.494 00:36:30 -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.757 00:36:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.758 00:36:30 -- setup/common.sh@31 -- # IFS=': '
00:02:37.758 00:36:30 -- setup/common.sh@31 -- # read -r var val _
00:02:37.758 00:36:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242098560 kB' 'MemAvailable: 245760296 kB' 'Buffers: 2696 kB' 'Cached: 10579404 kB' 'SwapCached: 0 kB' 'Active: 6668404 kB' 'Inactive: 4387964 kB' 'Active(anon): 6101600 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484196 kB' 'Mapped: 164284 kB' 'Shmem: 5627332 kB' 'KReclaimable: 372040 kB' 'Slab: 976348 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604308 kB' 'KernelStack: 24752 kB' 'PageTables: 8596 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618220 kB' 'Committed_AS: 7610300 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328912 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
[xtrace elided: setup/common.sh@31-32 scanned the system meminfo fields (MemTotal through HardwareCorrupted), tracing continue for every field that did not match AnonHugePages]
00:02:37.759 00:36:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:37.759 00:36:30 -- setup/common.sh@33 -- # echo 0
00:02:37.759 00:36:30 -- setup/common.sh@33 -- # return 0
00:02:37.759 00:36:30 -- setup/hugepages.sh@97 -- # anon=0
00:02:37.759 00:36:30 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:37.759 00:36:30 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:37.759 00:36:30 -- setup/common.sh@18 -- # local node=
00:02:37.759 00:36:30 -- setup/common.sh@19 -- # local var val
00:02:37.759 00:36:30 -- setup/common.sh@20 -- # local mem_f mem
00:02:37.759 00:36:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.759 00:36:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.759 00:36:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.759 00:36:30 -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.759 00:36:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.759 00:36:30 -- setup/common.sh@31 -- # IFS=': '
00:02:37.759 00:36:30 -- setup/common.sh@31 -- # read -r var val _
00:02:37.759 00:36:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242099076 kB' 'MemAvailable: 245760812 kB' 'Buffers: 2696 kB' 'Cached: 10579404 kB' 'SwapCached: 0 kB' 'Active: 6668788 kB' 'Inactive: 4387964 kB' 'Active(anon): 6101984 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484916 kB' 'Mapped: 164360 kB' 'Shmem: 5627332 kB' 'KReclaimable: 372040 kB' 'Slab: 976348 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604308 kB' 'KernelStack: 24656 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618220 kB' 'Committed_AS: 7610316 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328928 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
[xtrace elided: setup/common.sh@31-32 scanned the system meminfo fields (MemTotal through HugePages_Rsvd), tracing continue for every field that did not match HugePages_Surp]
00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:37.760 00:36:30 -- setup/common.sh@33 -- # echo 0
00:02:37.760 00:36:30 -- setup/common.sh@33 -- # return 0
00:02:37.760 00:36:30 -- setup/hugepages.sh@99 -- # surp=0
00:02:37.760 00:36:30 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:37.760 00:36:30 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:37.760 00:36:30 -- setup/common.sh@18 -- # local node=
00:02:37.760 00:36:30 -- setup/common.sh@19 -- # local var val
00:02:37.760 00:36:30 -- setup/common.sh@20 -- # local mem_f mem
00:02:37.760 00:36:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:37.760 00:36:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:37.760 00:36:30 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:37.760 00:36:30 -- setup/common.sh@28 -- # mapfile -t mem
00:02:37.760 00:36:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': '
00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _
00:02:37.760 00:36:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242099644 kB' 'MemAvailable: 245761380 kB' 'Buffers: 2696 kB' 'Cached: 10579420 kB' 'SwapCached: 0 kB' 'Active: 6668820 kB' 'Inactive: 4387964 kB'
'Active(anon): 6102016 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484404 kB' 'Mapped: 164332 kB' 'Shmem: 5627348 kB' 'KReclaimable: 372040 kB' 'Slab: 976348 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604308 kB' 'KernelStack: 24672 kB' 'PageTables: 8988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618220 kB' 'Committed_AS: 7624936 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328880 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.760 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.760 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.760 
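A minimal sketch of the get_meminfo helper that produces the trace above, reconstructed from the file/line markers (setup/common.sh@17-33); this illustrates the parse loop and is not the verbatim SPDK source:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below
    get_meminfo() {
        local get=$1 node=${2:-}          # field name, optional NUMA node
        local var val _ line
        local mem_f=/proc/meminfo mem
        # Per-node queries switch to the sysfs copy when it exists (common.sh@23-24);
        # with an empty $node the path does not exist and /proc/meminfo is used.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Strip the "Node N " prefix sysfs prepends to each line (common.sh@29).
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # Split "Key: value kB" on ':' and spaces; every non-matching key
            # is one of the 'continue' iterations seen in the trace.
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"                   # common.sh@33: emit value and return
            return 0
        done
        return 1
    }

Each scan that follows (HugePages_Rsvd, HugePages_Total, per-node HugePages_Surp) is this same loop with a different target key.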
[xtrace elided: the same setup/common.sh@31-32 scan repeats against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, skipping every field with 'continue' until HugePages_Rsvd matches]
00:36:30 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:37.761 00:36:30 -- setup/common.sh@33 -- # echo 0 00:02:37.761 00:36:30 -- setup/common.sh@33 -- # return 0 00:02:37.761
00:36:30 -- setup/hugepages.sh@100 -- # resv=0 00:02:37.761
00:36:30 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:02:37.761
nr_hugepages=1025
00:36:30 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:37.761
resv_hugepages=0
00:36:30 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:37.761
surplus_hugepages=0
00:36:30 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:37.761
anon_hugepages=0
00:36:30 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:37.761 00:36:30 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:02:37.761
00:36:30 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:37.761 00:36:30 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:37.761 00:36:30 -- setup/common.sh@18 -- # local node= 00:02:37.761 00:36:30 -- setup/common.sh@19 -- # local var val 00:02:37.761 00:36:30 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.761 00:36:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.761 00:36:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:37.761 00:36:30 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:37.761 00:36:30 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.761 00:36:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.762 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.762 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.762
00:36:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242099584 kB' 'MemAvailable: 245761320 kB' 'Buffers: 2696 kB' 'Cached: 10579432 kB' 'SwapCached: 0 kB' 'Active: 6668784 kB' 'Inactive: 4387964 kB' 'Active(anon): 6101980 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 484236 kB' 'Mapped: 164252 kB' 'Shmem: 5627360 kB' 'KReclaimable: 372040 kB' 'Slab: 976708 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604668 kB' 'KernelStack: 24704 kB' 'PageTables: 9108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136618220 kB' 'Committed_AS: 7610712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328864 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:37.762
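The check traced at hugepages.sh@107-110 above ties these reads together. A sketch of the accounting with this run's values (variable names follow the trace; the standalone framing is illustrative and assumes the get_meminfo sketch shown earlier):

    nr_hugepages=1025                      # requested: an odd split, 512 + 513
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1025 in this run

    # The kernel-reported total must equal requested + surplus + reserved pages:
    # 1025 == 1025 + 0 + 0 holds, so the test proceeds to the per-node checks.
    (( total == nr_hugepages + surp + resv )) || exit 1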
[xtrace elided: per-field scan against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, skipping each field with 'continue' until HugePages_Total matches]
00:36:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:37.763 00:36:30 -- setup/common.sh@33 -- # echo 1025 00:02:37.763 00:36:30 -- setup/common.sh@33 -- # return 0 00:02:37.763
00:36:30 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:02:37.763
00:36:30 -- setup/hugepages.sh@112 -- # get_nodes 00:02:37.763 00:36:30 -- setup/hugepages.sh@27 -- # local node 00:02:37.763 00:36:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.763 00:36:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:02:37.763 00:36:30 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:37.763 00:36:30 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:02:37.763 00:36:30 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:37.763 00:36:30 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:37.763
00:36:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.763 00:36:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.763 00:36:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:37.763 00:36:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.763 00:36:30 -- setup/common.sh@18 -- # local node=0 00:02:37.763 00:36:30 -- setup/common.sh@19 -- # local var val 00:02:37.763 00:36:30 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.764 00:36:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.764 00:36:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:37.764 00:36:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:37.764 00:36:30 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.764 00:36:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.764 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.764 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.764
00:36:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816224 kB' 'MemFree: 126448072 kB' 'MemUsed: 5368152 kB' 'SwapCached: 0 kB' 'Active: 2196092 kB' 'Inactive: 118148 kB' 'Active(anon): 1795464 kB' 'Inactive(anon): 0 kB' 'Active(file): 400628 kB' 'Inactive(file): 118148 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2043332 kB' 'Mapped: 126740 kB' 'AnonPages: 280196 kB' 'Shmem: 1524556 kB' 'KernelStack: 12776 kB' 'PageTables: 5420 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158668 kB' 'Slab: 479332 kB' 'SReclaimable: 158668 kB' 'SUnreclaim: 320664 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:02:37.764
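get_nodes, traced at hugepages.sh@27-33 above, discovers the NUMA layout before the per-node checks. A sketch reconstructed from the trace; the trace only shows the resulting assignments (512 and 513), so reading each node's HugePages_Total via get_meminfo is an assumption about how those values are obtained:

    shopt -s extglob
    get_nodes() {
        local node
        # Enumerate /sys/devices/system/node/node0, node1, ... (hugepages.sh@29).
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} strips the path down to the bare node index.
            nodes_sys[${node##*node}]=$(get_meminfo HugePages_Total "${node##*node}")
        done
        no_nodes=${#nodes_sys[@]}   # 2 on this machine
        (( no_nodes > 0 ))          # fail if no NUMA node was found
    }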
[xtrace elided: per-field scan of the node0 snapshot against \H\u\g\e\P\a\g\e\s\_\S\u\r\p until HugePages_Surp matches]
00:36:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.765 00:36:30 -- setup/common.sh@33 -- # echo 0 00:02:37.765 00:36:30 -- setup/common.sh@33 -- # return 0 00:02:37.765
00:36:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.765
00:36:30 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:37.765 00:36:30 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:37.765 00:36:30 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:02:37.765 00:36:30 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:37.765 00:36:30 -- setup/common.sh@18 -- # local node=1 00:02:37.765 00:36:30 -- setup/common.sh@19 -- # local var val 00:02:37.765 00:36:30 -- setup/common.sh@20 -- # local mem_f mem 00:02:37.765 00:36:30 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:37.765 00:36:30 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:02:37.765 00:36:30 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:02:37.765 00:36:30 -- setup/common.sh@28 -- # mapfile -t mem 00:02:37.765 00:36:30 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:37.765 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.765 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.765
00:36:30 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742212 kB' 'MemFree: 115651572 kB' 'MemUsed: 11090640 kB' 'SwapCached: 0 kB' 'Active: 4471868 kB' 'Inactive: 4269816 kB' 'Active(anon): 4305692 kB' 'Inactive(anon): 0 kB' 'Active(file): 166176 kB' 'Inactive(file): 4269816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8538812 kB' 'Mapped: 37512 kB' 'AnonPages: 203104 kB' 'Shmem: 4102820 kB' 'KernelStack: 11800 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 213372 kB' 'Slab: 497344 kB' 'SReclaimable: 213372 kB' 'SUnreclaim: 283972 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:02:37.765
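The per-node read just traced differs from the system-wide one only in its input: sysfs prefixes every line with "Node 1 ", and the common.sh@29 step strips that so the same scan applies. A standalone illustration of that normalization:

    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node1/meminfo
    # Raw lines look like "Node 1 MemTotal: 126742212 kB"; drop the prefix
    # from every array element so each line matches the /proc/meminfo shape.
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}"   # "MemTotal: 126742212 kB" ...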
00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.766 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.766 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- 
setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 
00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # continue 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # IFS=': ' 00:02:37.767 00:36:30 -- setup/common.sh@31 -- # read -r var val _ 00:02:37.767 00:36:30 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:37.767 00:36:30 -- setup/common.sh@33 -- # echo 0 00:02:37.767 00:36:30 -- setup/common.sh@33 -- # return 0 00:02:37.767 00:36:30 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:37.767 00:36:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:37.767 00:36:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.767 00:36:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.767 00:36:30 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:02:37.767 node0=512 expecting 513 00:02:37.767 00:36:30 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:37.768 00:36:30 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:37.768 00:36:30 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:37.768 00:36:30 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:02:37.768 node1=513 expecting 512 00:02:37.768 00:36:30 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:02:37.768 00:02:37.768 real 0m2.976s 00:02:37.768 user 0m0.992s 00:02:37.768 sys 0m1.855s 00:02:37.768 00:36:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:02:37.768 00:36:30 -- common/autotest_common.sh@10 -- # set +x 00:02:37.768 ************************************ 00:02:37.768 END TEST odd_alloc 00:02:37.768 ************************************ 00:02:37.768 00:36:30 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:02:37.768 00:36:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:02:37.768 00:36:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:02:37.768 00:36:30 -- common/autotest_common.sh@10 -- # set +x 00:02:37.768 ************************************ 00:02:37.768 START TEST custom_alloc 00:02:37.768 ************************************ 00:02:37.768 00:36:30 -- common/autotest_common.sh@1111 -- # custom_alloc 00:02:37.768 00:36:30 -- setup/hugepages.sh@167 -- # local IFS=, 00:02:37.768 00:36:30 -- setup/hugepages.sh@169 -- # local node 00:02:37.768 00:36:30 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:02:37.768 00:36:30 -- setup/hugepages.sh@170 -- # local nodes_hp 00:02:37.768 00:36:30 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:02:37.768 00:36:30 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:02:37.768 00:36:30 -- setup/hugepages.sh@49 -- # local size=1048576 00:02:37.768 00:36:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:02:37.768 00:36:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:02:37.768 00:36:30 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:02:37.768 00:36:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:02:37.768 00:36:30 -- setup/hugepages.sh@62 -- # user_nodes=() 00:02:37.768 00:36:30 -- setup/hugepages.sh@62 -- # local user_nodes 00:02:37.768 00:36:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:02:37.768 00:36:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:02:37.768 00:36:30 -- setup/hugepages.sh@67 -- # nodes_test=() 00:02:37.768 00:36:30 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:02:37.768 00:36:30 -- 
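The get_test_nr_hugepages 1048576 call traced above turns a requested pool size into a page count: with the 2048 kB Hugepagesize reported in the meminfo dumps, 1048576 kB yields the nr_hugepages=512 seen here, and the later 2097152 kB request yields 1024. A minimal sketch of that arithmetic, assuming the argument really is a size in kB (illustrative only; the authoritative logic is setup/hugepages.sh in the SPDK tree):

default_hugepages=2048   # kB per hugepage, from Hugepagesize in the dumps

get_test_nr_hugepages() {
    local size=$1                                # requested pool size in kB
    (( size >= default_hugepages )) || return 1  # not enough for one page
    nr_hugepages=$(( size / default_hugepages ))
}

get_test_nr_hugepages 1048576 && echo "nr_hugepages=$nr_hugepages"   # -> 512
get_test_nr_hugepages 2097152 && echo "nr_hugepages=$nr_hugepages"   # -> 1024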
00:02:37.768 00:36:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:37.768 00:36:30 -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:02:37.768 00:36:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:37.768 00:36:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:37.768 00:36:30 -- setup/hugepages.sh@83 -- # : 256
00:02:37.768 00:36:30 -- setup/hugepages.sh@84 -- # : 1
00:02:37.768 00:36:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:37.768 00:36:30 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:02:37.768 00:36:30 -- setup/hugepages.sh@83 -- # : 0
00:02:37.768 00:36:30 -- setup/hugepages.sh@84 -- # : 0
00:02:37.768 00:36:30 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:02:37.768 00:36:30 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:02:37.768 00:36:30 -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:02:37.768 00:36:30 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:02:37.768 00:36:30 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:37.768 00:36:30 -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:02:37.768 00:36:30 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:37.768 00:36:30 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:37.768 00:36:30 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:02:37.768 00:36:30 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:37.768 00:36:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:37.768 00:36:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:37.768 00:36:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:37.768 00:36:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:37.768 00:36:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:37.769 00:36:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:37.769 00:36:30 -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:02:37.769 00:36:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:37.769 00:36:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:37.769 00:36:30 -- setup/hugepages.sh@78 -- # return 0
00:02:37.769 00:36:30 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:02:37.769 00:36:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:37.769 00:36:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:37.769 00:36:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:37.769 00:36:30 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:02:37.769 00:36:30 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:02:37.769 00:36:30 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:02:37.769 00:36:30 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:02:37.769 00:36:30 -- setup/hugepages.sh@62 -- # user_nodes=()
00:02:37.769 00:36:30 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:37.769 00:36:30 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:37.769 00:36:30 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:37.769 00:36:30 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:37.769 00:36:30 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:37.769 00:36:30 -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:02:37.769 00:36:30 -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:02:37.769 00:36:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:02:37.769 00:36:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:02:37.769 00:36:30 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
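Two things happen in the hugepages.sh spans just traced: the requested page count is spread evenly across the box's two NUMA nodes (512 pages split 256/256, then 1024 split 512/512), and the per-node reservations are folded into the comma-joined HUGENODE string assigned below. A minimal sketch of both steps, with hypothetical helper names standing in for the real setup/hugepages.sh internals:

split_pages_per_node() {
    # Even split of _nr_hugepages across _no_nodes, as in the trace above
    local _nr_hugepages=$1 _no_nodes=$2
    local -a nodes_test
    local i
    for (( i = 0; i < _no_nodes; i++ )); do
        nodes_test[i]=$(( _nr_hugepages / _no_nodes ))
    done
    echo "${nodes_test[@]}"
}

build_hugenode() {
    # IFS=, makes "${HUGENODE[*]}" join with commas, yielding exactly the
    # 'nodes_hp[0]=512,nodes_hp[1]=1024' value assigned below
    local IFS=,
    local -a HUGENODE=() nodes_hp=(512 1024)
    local node
    for node in "${!nodes_hp[@]}"; do
        HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    done
    echo "HUGENODE=${HUGENODE[*]}"
}

split_pages_per_node 1024 2   # -> 512 512
build_hugenode                # -> HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024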
in "${!nodes_hp[@]}" 00:02:37.769 00:36:30 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:02:37.769 00:36:30 -- setup/hugepages.sh@78 -- # return 0 00:02:37.769 00:36:30 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:02:37.769 00:36:30 -- setup/hugepages.sh@187 -- # setup output 00:02:37.769 00:36:30 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:37.769 00:36:30 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:41.079 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:41.079 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.079 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:41.079 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:41.079 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.079 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:41.079 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:41.079 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:41.079 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:41.079 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:41.079 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:41.079 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:41.079 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:41.079 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:41.079 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:41.079 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:41.079 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:41.079 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:41.079 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:41.079 00:36:33 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:02:41.079 00:36:33 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:02:41.079 00:36:33 -- setup/hugepages.sh@89 -- # local node 00:02:41.079 00:36:33 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:41.079 00:36:33 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:41.079 00:36:33 -- setup/hugepages.sh@92 -- # local surp 00:02:41.079 00:36:33 -- setup/hugepages.sh@93 -- # local resv 00:02:41.079 00:36:33 -- setup/hugepages.sh@94 -- # local anon 00:02:41.079 00:36:33 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:41.079 00:36:33 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:41.079 00:36:33 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:41.079 00:36:33 -- setup/common.sh@18 -- # local node= 00:02:41.079 00:36:33 -- setup/common.sh@19 -- # local var val 00:02:41.079 00:36:33 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.079 00:36:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.079 00:36:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.079 00:36:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.079 00:36:33 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.079 00:36:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 241050472 kB' 'MemAvailable: 244712208 kB' 'Buffers: 2696 kB' 'Cached: 10579544 kB' 
'SwapCached: 0 kB' 'Active: 6667804 kB' 'Inactive: 4387964 kB' 'Active(anon): 6101000 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482380 kB' 'Mapped: 164356 kB' 'Shmem: 5627472 kB' 'KReclaimable: 372040 kB' 'Slab: 976312 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604272 kB' 'KernelStack: 24688 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094956 kB' 'Committed_AS: 7611024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328992 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- 
setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.079 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.079 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 
-- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- 
# [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:41.080 00:36:33 -- setup/common.sh@33 -- # echo 0 00:02:41.080 00:36:33 -- setup/common.sh@33 -- # return 0 00:02:41.080 00:36:33 -- setup/hugepages.sh@97 -- # anon=0 00:02:41.080 00:36:33 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:41.080 00:36:33 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:41.080 00:36:33 -- setup/common.sh@18 -- # local node= 00:02:41.080 00:36:33 -- setup/common.sh@19 -- # local var val 00:02:41.080 00:36:33 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.080 00:36:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.080 00:36:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.080 00:36:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.080 00:36:33 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.080 00:36:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.080 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.080 00:36:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 241050248 kB' 'MemAvailable: 244711984 kB' 'Buffers: 2696 kB' 'Cached: 10579544 kB' 'SwapCached: 0 kB' 'Active: 6668672 kB' 'Inactive: 4387964 kB' 'Active(anon): 6101868 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483188 kB' 'Mapped: 164356 kB' 'Shmem: 5627472 kB' 'KReclaimable: 372040 kB' 'Slab: 976264 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604224 kB' 'KernelStack: 24672 kB' 'PageTables: 8404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094956 
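The setup/common.sh trace above spells out the whole get_meminfo mechanism: pick a meminfo source (falling back to /proc/meminfo when no node is given), mapfile it, strip any "Node N " prefix, then scan field by field with IFS=': ' until the requested key matches and its value is echoed. A self-contained re-creation under those assumptions; a sketch only, not the authoritative setup/common.sh:

get_meminfo() {
    local get=$1 node=${2:-}
    local var val _
    local mem_f=/proc/meminfo
    local -a mem
    # Same order of checks as the trace: does the per-node file exist,
    # and was a node actually requested?
    [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    shopt -s extglob                    # needed for the +([0-9]) pattern
    mem=("${mem[@]#Node +([0-9]) }")    # drop "Node 0 " prefixes
    local IFS=': '
    while read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Surp   # -> 0 on this box, per the dumps above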
[xtrace condensed: setup/common.sh@31-32 scans the dump above field by field against HugePages_Surp, continuing past each non-matching field]
00:02:41.082 00:36:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.082 00:36:33 -- setup/common.sh@33 -- # echo 0
00:02:41.082 00:36:33 -- setup/common.sh@33 -- # return 0
00:02:41.082 00:36:33 -- setup/hugepages.sh@99 -- # surp=0
00:02:41.082 00:36:33 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:41.082 00:36:33 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:41.082 00:36:33 -- setup/common.sh@18 -- # local node=
00:02:41.082 00:36:33 -- setup/common.sh@19 -- # local var val
00:02:41.082 00:36:33 -- setup/common.sh@20 -- # local mem_f mem
00:02:41.082 00:36:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.082 00:36:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.082 00:36:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.082 00:36:33 -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.082 00:36:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.082 00:36:33 -- setup/common.sh@31 -- # IFS=': '
00:02:41.082 00:36:33 -- setup/common.sh@31 -- # read -r var val _
00:02:41.082 00:36:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 241051784 kB' 'MemAvailable: 244713520 kB' 'Buffers: 2696 kB' 'Cached: 10579556 kB' 'SwapCached: 0 kB' 'Active: 6667176 kB' 'Inactive: 4387964 kB' 'Active(anon): 6100372 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482072 kB' 'Mapped: 164284 kB' 'Shmem: 5627484 kB' 'KReclaimable: 372040 kB' 'Slab: 976248 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604208 kB' 'KernelStack: 24720 kB' 'PageTables: 8248 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094956 kB' 'Committed_AS: 7611048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328928 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
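Each of these get_meminfo round trips answers a question that can also be checked by hand against the same kernel counters. A quick manual equivalent of the Surp/Rsvd/Total queries (standard procfs/sysfs interfaces, not part of the test scripts):

# Global hugepage accounting, the same fields the scans above walk past:
grep -E 'HugePages_(Total|Free|Rsvd|Surp)|Hugepagesize' /proc/meminfo

# Per-node view of the 2048 kB pool on this two-socket box:
cat /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
cat /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages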
[xtrace condensed: setup/common.sh@31-32 scans the dump above field by field against HugePages_Rsvd, continuing past each non-matching field]
00:02:41.083 00:36:33 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:41.083 00:36:33 -- setup/common.sh@33 -- # echo 0
00:02:41.083 00:36:33 -- setup/common.sh@33 -- # return 0
00:02:41.083 00:36:33 -- setup/hugepages.sh@100 -- # resv=0
00:02:41.083 00:36:33 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:02:41.083 nr_hugepages=1536
00:02:41.083 00:36:33 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:41.083 resv_hugepages=0
00:02:41.083 00:36:33 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:41.083 surplus_hugepages=0
00:02:41.083 00:36:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:41.083 anon_hugepages=0
00:02:41.083 00:36:33 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:41.083 00:36:33 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
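With anon, surp and resv collected, the verification reduces to the two integer checks traced at hugepages.sh@107 and @109 above. Pulled out as a standalone sketch (hypothetical function name; the values are the ones echoed in the summary just printed):

verify_totals() {
    local expected=$1 nr=$2 surp=$3 resv=$4
    (( expected == nr + surp + resv )) || return 1  # pool adds up
    (( expected == nr )) || return 1                # no surplus/reserved drift
    echo "ok: $nr hugepages, surplus=$surp, reserved=$resv"
}

verify_totals 1536 1536 0 0   # -> ok: 1536 hugepages, surplus=0, reserved=0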
00:02:41.083 surplus_hugepages=0 00:02:41.083 00:36:33 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:41.083 anon_hugepages=0 00:02:41.083 00:36:33 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:02:41.083 00:36:33 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:02:41.083 00:36:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:02:41.083 00:36:33 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:41.083 00:36:33 -- setup/common.sh@18 -- # local node= 00:02:41.083 00:36:33 -- setup/common.sh@19 -- # local var val 00:02:41.083 00:36:33 -- setup/common.sh@20 -- # local mem_f mem 00:02:41.083 00:36:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:41.083 00:36:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:41.083 00:36:33 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:41.083 00:36:33 -- setup/common.sh@28 -- # mapfile -t mem 00:02:41.083 00:36:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.083 00:36:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 241052080 kB' 'MemAvailable: 244713816 kB' 'Buffers: 2696 kB' 'Cached: 10579572 kB' 'SwapCached: 0 kB' 'Active: 6666608 kB' 'Inactive: 4387964 kB' 'Active(anon): 6099804 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481488 kB' 'Mapped: 164276 kB' 'Shmem: 5627500 kB' 'KReclaimable: 372040 kB' 'Slab: 976216 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604176 kB' 'KernelStack: 24608 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094956 kB' 'Committed_AS: 7611064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328928 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:41.083 00:36:33 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.083 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.083 00:36:33 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.083 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.083 00:36:33 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.083 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # read -r var val _ 00:02:41.083 00:36:33 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:41.083 00:36:33 -- setup/common.sh@32 -- # continue 00:02:41.083 00:36:33 -- setup/common.sh@31 -- # IFS=': ' 
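Editor's note on the trace above: the harness derives resv/surplus/anon hugepage counts by scraping /proc/meminfo one field at a time, which is why the log repeats the same read/skip pair for every field. A minimal standalone sketch of that scrape (illustrative names, not the exact SPDK setup/common.sh helper):

  #!/usr/bin/env bash
  # Echo the value of a single /proc/meminfo field, e.g. HugePages_Rsvd -> 0.
  get_meminfo() {
      local get=$1 var val _
      while IFS=': ' read -r var val _; do
          # Every non-matching field is skipped, exactly like the traced loop.
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1   # field not present
  }
  get_meminfo HugePages_Rsvd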
00:02:41.083 00:36:33 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:41.083 00:36:33 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:41.083 00:36:33 -- setup/common.sh@18 -- # local node=
00:02:41.083 00:36:33 -- setup/common.sh@19 -- # local var val
00:02:41.083 00:36:33 -- setup/common.sh@20 -- # local mem_f mem
00:02:41.083 00:36:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.083 00:36:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:41.083 00:36:33 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:41.083 00:36:33 -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.083 00:36:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.083 00:36:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 241052080 kB' 'MemAvailable: 244713816 kB' 'Buffers: 2696 kB' 'Cached: 10579572 kB' 'SwapCached: 0 kB' 'Active: 6666608 kB' 'Inactive: 4387964 kB' 'Active(anon): 6099804 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 481488 kB' 'Mapped: 164276 kB' 'Shmem: 5627500 kB' 'KReclaimable: 372040 kB' 'Slab: 976216 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604176 kB' 'KernelStack: 24608 kB' 'PageTables: 8028 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136094956 kB' 'Committed_AS: 7611064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328928 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
00:02:41.083 [... field-by-field skip loop elided; identical xtrace repeats for every field ahead of HugePages_Total ...]
00:02:41.085 00:36:33 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:41.085 00:36:33 -- setup/common.sh@33 -- # echo 1536
00:02:41.085 00:36:33 -- setup/common.sh@33 -- # return 0
00:02:41.085 00:36:33 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:02:41.085 00:36:33 -- setup/hugepages.sh@112 -- # get_nodes
00:02:41.085 00:36:33 -- setup/hugepages.sh@27 -- # local node
00:02:41.085 00:36:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:41.085 00:36:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:02:41.085 00:36:33 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:41.085 00:36:33 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:41.085 00:36:33 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:41.085 00:36:33 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
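Editor's note: get_nodes above found two NUMA nodes and recorded per-node hugepage targets of 512 (node0) and 1024 (node1). A sketch of that discovery; the nr_hugepages sysfs path is the standard kernel location but an assumption here, since the trace does not show where nodes_sys gets its values:

  #!/usr/bin/env bash
  # Enumerate NUMA nodes and their 2048 kB hugepage counts via sysfs.
  declare -A nodes_sys
  for node in /sys/devices/system/node/node[0-9]*; do
      # e.g. nodes_sys[0]=512, nodes_sys[1]=1024 on the machine traced above
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]}"
  for n in "${!nodes_sys[@]}"; do echo "node$n=${nodes_sys[$n]}"; done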
00:02:41.085 00:36:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:41.085 00:36:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:41.085 00:36:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:41.085 00:36:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:41.085 00:36:33 -- setup/common.sh@18 -- # local node=0
00:02:41.085 00:36:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.085 00:36:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:41.085 00:36:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:41.085 00:36:33 -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.085 00:36:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.085 00:36:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816224 kB' 'MemFree: 126436180 kB' 'MemUsed: 5380044 kB' 'SwapCached: 0 kB' 'Active: 2193600 kB' 'Inactive: 118148 kB' 'Active(anon): 1792972 kB' 'Inactive(anon): 0 kB' 'Active(file): 400628 kB' 'Inactive(file): 118148 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2043440 kB' 'Mapped: 126756 kB' 'AnonPages: 277352 kB' 'Shmem: 1524664 kB' 'KernelStack: 12728 kB' 'PageTables: 5056 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158668 kB' 'Slab: 478772 kB' 'SReclaimable: 158668 kB' 'SUnreclaim: 320104 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:02:41.085 [... field-by-field skip loop elided; identical xtrace repeats for every node0 field ahead of HugePages_Surp ...]
00:02:41.086 00:36:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.086 00:36:33 -- setup/common.sh@33 -- # echo 0
00:02:41.086 00:36:33 -- setup/common.sh@33 -- # return 0
00:02:41.086 00:36:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
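Editor's note: for the per-node pass the helper swaps /proc/meminfo for /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the traced mem=("${mem[@]#Node +([0-9]) }") expansion strips. A sketch of that per-node read (requires a NUMA machine; the node number is illustrative):

  #!/usr/bin/env bash
  shopt -s extglob                  # needed for the +([0-9]) pattern below
  node=0
  mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
  mem=("${mem[@]#Node +([0-9]) }")  # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"
  for line in "${mem[@]}"; do
      IFS=': ' read -r var val _ <<< "$line"
      [[ $var == HugePages_Surp ]] && echo "$val"   # prints 0 in the run above
  done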
00:02:41.086 00:36:33 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:41.086 00:36:33 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:41.086 00:36:33 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:02:41.086 00:36:33 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:41.086 00:36:33 -- setup/common.sh@18 -- # local node=1
00:02:41.086 00:36:33 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:41.086 00:36:33 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:02:41.086 00:36:33 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:02:41.086 00:36:33 -- setup/common.sh@28 -- # mapfile -t mem
00:02:41.086 00:36:33 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:41.086 00:36:33 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126742212 kB' 'MemFree: 114613728 kB' 'MemUsed: 12128484 kB' 'SwapCached: 0 kB' 'Active: 4472988 kB' 'Inactive: 4269816 kB' 'Active(anon): 4306812 kB' 'Inactive(anon): 0 kB' 'Active(file): 166176 kB' 'Inactive(file): 4269816 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8538828 kB' 'Mapped: 37520 kB' 'AnonPages: 204084 kB' 'Shmem: 4102836 kB' 'KernelStack: 11944 kB' 'PageTables: 3016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 213372 kB' 'Slab: 497444 kB' 'SReclaimable: 213372 kB' 'SUnreclaim: 284072 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:02:41.086 [... field-by-field skip loop elided; identical xtrace repeats for every node1 field ahead of HugePages_Surp ...]
00:02:41.087 00:36:33 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:41.087 00:36:33 -- setup/common.sh@33 -- # echo 0
00:02:41.087 00:36:33 -- setup/common.sh@33 -- # return 0
00:02:41.087 00:36:33 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:41.087 00:36:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.087 00:36:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.087 00:36:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.087 00:36:33 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:02:41.087 node0=512 expecting 512
00:02:41.087 00:36:33 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:41.087 00:36:33 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:41.087 00:36:33 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:41.087 00:36:33 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:02:41.087 node1=1024 expecting 1024
00:02:41.087 00:36:33 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:02:41.087
00:02:41.087 real 0m3.009s
00:02:41.087 user 0m1.048s
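Editor's note: that closes custom_alloc. The test requested 1536 pages split 512/1024 across the two nodes, and the final [[ 512,1024 == 512,1024 ]] gate is just this arithmetic, restated with the values read back in the trace:

  #!/usr/bin/env bash
  # Values observed above: zero surplus and reserved pages, per-node totals as requested.
  nr_hugepages=1536 surp=0 resv=0
  node0=512 node1=1024
  (( node0 + node1 == nr_hugepages + surp + resv )) && echo 'custom_alloc: totals consistent'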
00:02:41.087 sys 0m1.821s
00:02:41.087 00:36:33 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:41.087 00:36:33 -- common/autotest_common.sh@10 -- # set +x
00:02:41.087 ************************************
00:02:41.087 END TEST custom_alloc
00:02:41.087 ************************************
00:02:41.087 00:36:33 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:02:41.087 00:36:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:41.087 00:36:33 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:41.087 00:36:33 -- common/autotest_common.sh@10 -- # set +x
00:02:41.087 ************************************
00:02:41.087 START TEST no_shrink_alloc
00:02:41.087 ************************************
00:02:41.087 00:36:33 -- common/autotest_common.sh@1111 -- # no_shrink_alloc
00:02:41.087 00:36:33 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:02:41.087 00:36:33 -- setup/hugepages.sh@49 -- # local size=2097152
00:02:41.087 00:36:33 -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:02:41.087 00:36:33 -- setup/hugepages.sh@51 -- # shift
00:02:41.087 00:36:33 -- setup/hugepages.sh@52 -- # node_ids=('0')
00:02:41.087 00:36:33 -- setup/hugepages.sh@52 -- # local node_ids
00:02:41.087 00:36:33 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:02:41.087 00:36:33 -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:02:41.087 00:36:33 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:02:41.087 00:36:33 -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:02:41.087 00:36:33 -- setup/hugepages.sh@62 -- # local user_nodes
00:02:41.087 00:36:33 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:02:41.087 00:36:33 -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:02:41.087 00:36:33 -- setup/hugepages.sh@67 -- # nodes_test=()
00:02:41.087 00:36:33 -- setup/hugepages.sh@67 -- # local -g nodes_test
00:02:41.087 00:36:33 -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:02:41.087 00:36:33 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:02:41.087 00:36:33 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:02:41.087 00:36:33 -- setup/hugepages.sh@73 -- # return 0
00:02:41.087 00:36:33 -- setup/hugepages.sh@198 -- # setup output
00:02:41.087 00:36:33 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:41.087 00:36:33 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh
00:02:43.691 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:43.691 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:43.691 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:43.691 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:43.691 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:43.691 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:43.691 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:43.691 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:43.691 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:43.691 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:43.691 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:43.691 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:43.691 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:43.691 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:43.691 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:43.691 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver
00:02:43.691 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver
00:02:43.691 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver
00:02:43.691 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver
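Editor's note: get_test_nr_hugepages 2097152 0 sized the pool for no_shrink_alloc. The argument is in kilobytes, so with the Hugepagesize of 2048 kB reported in the dumps it comes out to 1024 pages, all assigned to node 0. The conversion as a sketch:

  #!/usr/bin/env bash
  size_kb=2097152                                            # requested pool size in kB
  hp_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this machine
  echo "nr_hugepages=$(( size_kb / hp_kb ))"                 # -> 1024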
-r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # IFS=': ' 00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _ 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:43.958 00:36:36 -- setup/common.sh@32 -- # continue 00:02:43.958 00:36:36 -- 
setup/common.sh@31 -- # IFS=': '
00:02:43.958 00:36:36 -- setup/common.sh@31 -- # read -r var val _
00:02:43.958 00:36:36 [trace condensed: setup/common.sh@31-32 reads each remaining /proc/meminfo field (SwapFree .. HardwareCorrupted) and compares it against AnonHugePages; none match, each hits continue]
00:02:43.958 00:36:36 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:02:43.958 00:36:36 -- setup/common.sh@33 -- # echo 0
00:02:43.958 00:36:36 -- setup/common.sh@33 -- # return 0
00:02:43.958 00:36:36 -- setup/hugepages.sh@97 -- # anon=0
00:02:43.958 00:36:36 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:02:43.959 00:36:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:43.959 00:36:36 -- setup/common.sh@18 -- # local node=
00:02:43.959 00:36:36 -- setup/common.sh@19 -- # local var val
00:02:43.959 00:36:36 -- setup/common.sh@20 -- # local mem_f mem
00:02:43.959 00:36:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.959 00:36:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.959 00:36:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.959 00:36:36 -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.959 00:36:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.959 00:36:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242097580 kB' 'MemAvailable: 245759316 kB' 'Buffers: 2696 kB' 'Cached: 10579676 kB' 'SwapCached: 0 kB' 'Active: 6668484 kB' 'Inactive: 4387964 kB' 'Active(anon): 6101680 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483260 kB' 'Mapped: 164416 kB' 'Shmem: 5627604 kB' 'KReclaimable: 372040 kB' 'Slab: 975856 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 603816 kB' 'KernelStack: 24656 kB' 'PageTables: 8244 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7611988 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328992 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
00:02:43.959 00:36:36 [trace condensed: setup/common.sh@31-32 reads each snapshot field (MemTotal .. HugePages_Rsvd) and compares it against HugePages_Surp; none match, each hits continue]
00:02:43.960 00:36:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:43.960 00:36:36 -- setup/common.sh@33 -- # echo 0
00:02:43.960 00:36:36 -- setup/common.sh@33 -- # return 0
00:02:43.960 00:36:36 -- setup/hugepages.sh@99 -- # surp=0
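[note: the loop traced above is setup/common.sh's get_meminfo scanning /proc/meminfo one field at a time. A minimal bash sketch of the same pattern follows; the name get_meminfo_sketch and the direct file read are illustrative assumptions, since the real script first snapshots the file into an array via mapfile.]

#!/usr/bin/env bash
# Sketch of the lookup pattern traced above: split each /proc/meminfo
# line on ': ', compare the key, print the matching value.
get_meminfo_sketch() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		# IFS=': ' consumes the key's trailing colon, so 'HugePages_Surp:'
		# arrives here as 'HugePages_Surp' and a plain compare works.
		if [[ $var == "$get" ]]; then
			echo "$val"
			return 0
		fi
	done </proc/meminfo
	return 1
}

get_meminfo_sketch HugePages_Surp   # prints 0 in the run traced above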
00:02:43.960 00:36:36 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:02:43.960 00:36:36 -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:02:43.960 00:36:36 -- setup/common.sh@18 -- # local node=
00:02:43.960 00:36:36 -- setup/common.sh@19 -- # local var val
00:02:43.960 00:36:36 -- setup/common.sh@20 -- # local mem_f mem
00:02:43.960 00:36:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.960 00:36:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.960 00:36:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.960 00:36:36 -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.960 00:36:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.960 00:36:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242096760 kB' 'MemAvailable: 245758496 kB' 'Buffers: 2696 kB' 'Cached: 10579676 kB' 'SwapCached: 0 kB' 'Active: 6667524 kB' 'Inactive: 4387964 kB' 'Active(anon): 6100720 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482260 kB' 'Mapped: 164340 kB' 'Shmem: 5627604 kB' 'KReclaimable: 372040 kB' 'Slab: 975852 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 603812 kB' 'KernelStack: 24736 kB' 'PageTables: 8712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7612004 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328992 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
00:02:43.960 00:36:36 [trace condensed: setup/common.sh@31-32 reads each snapshot field (MemTotal .. HugePages_Free) and compares it against HugePages_Rsvd; none match, each hits continue]
00:02:43.961 00:36:36 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:02:43.961 00:36:36 -- setup/common.sh@33 -- # echo 0
00:02:43.961 00:36:36 -- setup/common.sh@33 -- # return 0
00:02:43.961 00:36:36 -- setup/hugepages.sh@100 -- # resv=0
00:02:43.961 00:36:36 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:02:43.961 nr_hugepages=1024
00:02:43.961 00:36:36 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:02:43.961 resv_hugepages=0
00:02:43.961 00:36:36 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:02:43.961 surplus_hugepages=0
00:02:43.961 00:36:36 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:02:43.961 anon_hugepages=0
00:02:43.961 00:36:36 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:43.961 00:36:36 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
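[note: the hugepages.sh@107-109 checks above amount to: the requested page count must equal HugePages_Total once surplus and reserved pages are folded in. A self-contained restatement of that arithmetic follows; gm is a hypothetical shorthand helper, not a name from the scripts.]

#!/usr/bin/env bash
# gm KEY -> value of that /proc/meminfo field (illustrative helper).
gm() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

nr_hugepages=1024                  # requested by the test
surp=$(gm HugePages_Surp)          # 0 in this run
resv=$(gm HugePages_Rsvd)          # 0 in this run
total=$(gm HugePages_Total)        # 1024 in this run

# The check traced as '(( 1024 == nr_hugepages + surp + resv ))':
if (( total == nr_hugepages + surp + resv )); then
	echo "hugepage accounting consistent"
fi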
00:02:43.961 00:36:36 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:02:43.961 00:36:36 -- setup/common.sh@17 -- # local get=HugePages_Total
00:02:43.961 00:36:36 -- setup/common.sh@18 -- # local node=
00:02:43.961 00:36:36 -- setup/common.sh@19 -- # local var val
00:02:43.961 00:36:36 -- setup/common.sh@20 -- # local mem_f mem
00:02:43.961 00:36:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.961 00:36:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:02:43.961 00:36:36 -- setup/common.sh@25 -- # [[ -n '' ]]
00:02:43.962 00:36:36 -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.962 00:36:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.962 00:36:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242096152 kB' 'MemAvailable: 245757888 kB' 'Buffers: 2696 kB' 'Cached: 10579704 kB' 'SwapCached: 0 kB' 'Active: 6667652 kB' 'Inactive: 4387964 kB' 'Active(anon): 6100848 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482376 kB' 'Mapped: 164340 kB' 'Shmem: 5627632 kB' 'KReclaimable: 372040 kB' 'Slab: 975852 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 603812 kB' 'KernelStack: 24640 kB' 'PageTables: 8156 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7610496 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328976 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB'
00:02:43.962 00:36:36 [trace condensed: setup/common.sh@31-32 reads each snapshot field (MemTotal .. Unaccepted) and compares it against HugePages_Total; none match, each hits continue]
00:02:43.963 00:36:36 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:02:43.963 00:36:36 -- setup/common.sh@33 -- # echo 1024
00:02:43.963 00:36:36 -- setup/common.sh@33 -- # return 0
00:02:43.963 00:36:36 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:02:43.963 00:36:36 -- setup/hugepages.sh@112 -- # get_nodes
00:02:43.963 00:36:36 -- setup/hugepages.sh@27 -- # local node
00:02:43.963 00:36:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:43.963 00:36:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:02:43.963 00:36:36 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:02:43.963 00:36:36 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:02:43.963 00:36:36 -- setup/hugepages.sh@32 -- # no_nodes=2
00:02:43.963 00:36:36 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:02:43.963 00:36:36 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:02:43.963 00:36:36 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:02:43.963 00:36:36 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:02:43.963 00:36:36 -- setup/common.sh@17 -- # local get=HugePages_Surp
00:02:43.963 00:36:36 -- setup/common.sh@18 -- # local node=0
00:02:43.963 00:36:36 -- setup/common.sh@19 -- # local var val
00:02:43.963 00:36:36 -- setup/common.sh@20 -- # local mem_f mem
00:02:43.963 00:36:36 -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:02:43.963 00:36:36 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:02:43.963 00:36:36 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:02:43.963 00:36:36 -- setup/common.sh@28 -- # mapfile -t mem
00:02:43.963 00:36:36 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:02:43.963 00:36:36 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816224 kB' 'MemFree: 125394788 kB' 'MemUsed: 6421436 kB' 'SwapCached: 0 kB' 'Active: 2193588 kB' 'Inactive: 118148 kB' 'Active(anon): 1792960 kB' 'Inactive(anon): 0 kB' 'Active(file): 400628 kB' 'Inactive(file): 118148 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2043492 kB' 'Mapped: 126812 kB' 'AnonPages: 277268 kB' 'Shmem: 1524716 kB' 'KernelStack: 12920 kB' 'PageTables: 5448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158668 kB' 'Slab: 478308 kB' 'SReclaimable: 158668 kB' 'SUnreclaim: 319640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
setup/common.sh@31 -- # IFS=': '
[trace condensed: 00:02:43.963-964 00:36:36 — setup/common.sh@31-@32 sweeps the remaining per-node meminfo fields (MemUsed, SwapCached, Active … Unaccepted, HugePages_Total, HugePages_Free), continuing past each one until the field name matches HugePages_Surp]
00:02:43.964 00:36:36 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:43.964 00:36:36 -- setup/common.sh@33 -- # echo 0 00:02:43.964 00:36:36 -- setup/common.sh@33 -- # return 0 00:02:43.964 00:36:36 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:02:43.964 00:36:36 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:02:43.964 00:36:36 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:02:43.964 00:36:36 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:02:43.964 00:36:36 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:02:43.964 node0=1024 expecting 1024 00:36:36 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:02:43.964 00:36:36 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:02:43.964 00:36:36 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:02:43.964 00:36:36 -- setup/hugepages.sh@202 -- # setup output 00:02:43.964
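Every one of those swept fields is produced by the same few traced lines of setup/common.sh. As a reading aid, here is a minimal bash sketch of that scan pattern — an assumed simplification (paths and the Node-prefix handling are illustrative), not the verbatim setup/common.sh source:

    # Read "field: value [kB]" records until the requested field matches,
    # exactly the IFS=': ' read loop the trace above expands for each field.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo
        # Per-node counters live in sysfs; every line there carries a "Node <N> " prefix.
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            # Skip ("continue" in the trace) every field until the requested one matches.
            [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
        done < <(sed -E 's/^Node [0-9]+ //' "$mem_f")
        return 1
    }

Called as get_meminfo HugePages_Surp (system-wide) or with a node argument for the per-node sweep above; each large trace block in this log is one such call expanded by set -x.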
00:36:36 -- setup/common.sh@9 -- # [[ output == output ]] 00:02:43.964 00:36:36 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:02:46.513 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.513 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.513 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.513 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.513 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.513 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.513 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.513 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.513 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.513 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.513 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.513 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.513 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.513 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.513 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.513 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:02:46.513 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.513 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:02:46.513 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:02:46.513 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:02:46.513 00:36:39 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:02:46.513 00:36:39 -- setup/hugepages.sh@89 -- # local node 00:02:46.513 00:36:39 -- setup/hugepages.sh@90 -- # local sorted_t 00:02:46.513 00:36:39 -- setup/hugepages.sh@91 -- # local sorted_s 00:02:46.513 00:36:39 -- setup/hugepages.sh@92 -- # local surp 00:02:46.513 00:36:39 -- setup/hugepages.sh@93 -- # local resv 00:02:46.513 00:36:39 -- setup/hugepages.sh@94 -- # local anon 00:02:46.513 00:36:39 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:02:46.513 00:36:39 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:02:46.513 00:36:39 -- setup/common.sh@17 -- # local get=AnonHugePages 00:02:46.513 00:36:39 -- setup/common.sh@18 -- # local node= 00:02:46.513 00:36:39 -- setup/common.sh@19 -- # local var val 00:02:46.513 00:36:39 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.513 00:36:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.513 00:36:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.513 00:36:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.513 00:36:39 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.513 00:36:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.513 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.513 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.513 00:36:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242062008 kB' 'MemAvailable: 245723744 kB' 'Buffers: 2696 kB' 'Cached: 10579776 kB' 'SwapCached: 0 kB' 'Active: 6668224 kB' 'Inactive: 4387964 kB' 'Active(anon): 6101420 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 483052 kB' 
'Mapped: 164784 kB' 'Shmem: 5627704 kB' 'KReclaimable: 372040 kB' 'Slab: 976932 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604892 kB' 'KernelStack: 24480 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7609796 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328928 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:46.513
[trace condensed: 00:02:46.513-514 00:36:39 — setup/common.sh@31-@32 sweeps every field of the dump above (MemTotal … HardwareCorrupted), continuing until it reaches AnonHugePages]
00:36:39 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:02:46.514 00:36:39 -- setup/common.sh@33 -- # echo 0 00:02:46.514 00:36:39 -- setup/common.sh@33 -- # return 0 00:02:46.514 00:36:39 -- setup/hugepages.sh@97 -- # anon=0 00:02:46.514 00:36:39 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:02:46.514 00:36:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.514 00:36:39 -- setup/common.sh@18 -- # local node= 00:02:46.514 00:36:39 -- setup/common.sh@19 -- # local var val 00:02:46.514 00:36:39 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.514 00:36:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.514 00:36:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.514 00:36:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.514 00:36:39 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.514 00:36:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.514 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.514 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.514 00:36:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242065148 kB' 'MemAvailable: 245726884 kB' 'Buffers: 2696 kB' 'Cached: 10579776 kB' 'SwapCached: 0 kB' 'Active: 6667584 kB' 'Inactive: 4387964 kB' 'Active(anon): 6100780 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482280 kB' 'Mapped: 164352 kB' 'Shmem: 5627704 kB' 'KReclaimable: 372040 kB' 'Slab: 976880 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604840 kB' 'KernelStack: 24464 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7609808 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328848 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:46.514
[trace condensed: 00:02:46.514-516 00:36:39 — setup/common.sh@31-@32 sweeps every field of the dump above, continuing until it reaches HugePages_Surp]
00:36:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:02:46.516 00:36:39 -- setup/common.sh@33 -- # echo 0 00:02:46.516 00:36:39 -- setup/common.sh@33 -- # return 0 00:02:46.516 00:36:39 -- setup/hugepages.sh@99 -- # surp=0 00:02:46.516 00:36:39 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:02:46.516 00:36:39 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:02:46.516 00:36:39 -- setup/common.sh@18 -- # local node= 00:02:46.516 00:36:39 -- setup/common.sh@19 -- # local var val 00:02:46.516 00:36:39 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.516 00:36:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.516 00:36:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.516 00:36:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.516 00:36:39 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.516 00:36:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.516 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.516 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.516 00:36:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242065432 kB' 'MemAvailable: 245727168 kB' 'Buffers: 2696 kB' 'Cached: 10579788 kB' 'SwapCached: 0 kB' 'Active: 6667556 kB' 'Inactive: 4387964 kB' 'Active(anon): 6100752 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482284 kB' 'Mapped: 164352 kB' 'Shmem: 5627716 kB' 'KReclaimable: 372040 kB' 'Slab: 976888 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604848 kB' 'KernelStack: 24512 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7609824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328848 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:46.516
[trace condensed: 00:02:46.516-517 00:36:39 — setup/common.sh@31-@32 sweeps every field of the dump above, continuing until it reaches HugePages_Rsvd]
00:36:39 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:02:46.517 00:36:39 -- setup/common.sh@33 -- # echo 0 00:02:46.517 00:36:39 -- setup/common.sh@33 -- # return 0 00:02:46.517 00:36:39 -- setup/hugepages.sh@100 -- # resv=0 00:02:46.517 00:36:39 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:02:46.517 nr_hugepages=1024 00:36:39 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:02:46.517 resv_hugepages=0 00:36:39 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:02:46.517 surplus_hugepages=0 00:36:39 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:02:46.517 anon_hugepages=0 00:36:39 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.517 00:36:39 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
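Put together, the four get_meminfo lookups and the @102-@109 checks above amount to the following bookkeeping — a hedged sketch of the traced verify_nr_hugepages flow, reusing the get_meminfo sketch from earlier; the variable names mirror the trace, but the function body is a simplification, not the verbatim setup/hugepages.sh source:

    verify_nr_hugepages() {
        local expected=1024    # the 'node0=1024 expecting 1024' figure from earlier
        local anon surp resv total
        anon=$(get_meminfo AnonHugePages)    # 0 in this run
        surp=$(get_meminfo HugePages_Surp)   # 0
        resv=$(get_meminfo HugePages_Rsvd)   # 0
        total=$(get_meminfo HugePages_Total) # 1024
        echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
        # Mirrors hugepages.sh@107/@109: the allocation must be exactly accounted for.
        (( total == expected + surp + resv )) && (( total == expected ))
    }

With surp=resv=0 and total=1024, both arithmetic checks pass, which is why the trace proceeds without error.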
00:02:46.517 00:36:39 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:36:39 -- setup/common.sh@17 -- # local get=HugePages_Total 00:02:46.517 00:36:39 -- setup/common.sh@18 -- # local node= 00:02:46.517 00:36:39 -- setup/common.sh@19 -- # local var val 00:02:46.517 00:36:39 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.517 00:36:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.517 00:36:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:02:46.517 00:36:39 -- setup/common.sh@25 -- # [[ -n '' ]] 00:02:46.517 00:36:39 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.517 00:36:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.517 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.517 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.517 00:36:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 258558436 kB' 'MemFree: 242065432 kB' 'MemAvailable: 245727168 kB' 'Buffers: 2696 kB' 'Cached: 10579804 kB' 'SwapCached: 0 kB' 'Active: 6667580 kB' 'Inactive: 4387964 kB' 'Active(anon): 6100776 kB' 'Inactive(anon): 0 kB' 'Active(file): 566804 kB' 'Inactive(file): 4387964 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 482288 kB' 'Mapped: 164352 kB' 'Shmem: 5627732 kB' 'KReclaimable: 372040 kB' 'Slab: 976888 kB' 'SReclaimable: 372040 kB' 'SUnreclaim: 604848 kB' 'KernelStack: 24528 kB' 'PageTables: 7888 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 136619244 kB' 'Committed_AS: 7609836 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 328880 kB' 'VmallocChunk: 0 kB' 'Percpu: 83968 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 2934848 kB' 'DirectMap2M: 15716352 kB' 'DirectMap1G: 251658240 kB' 00:02:46.517
[trace condensed: 00:02:46.517-519 00:36:39 — setup/common.sh@31-@32 sweeps the dump above field by field (MemTotal … ShmemPmdMapped) toward HugePages_Total]
setup/common.sh@32 -- # continue 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # continue 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # continue 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # continue 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # continue 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # continue 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # read -r var val _ 00:02:46.519 00:36:39 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:02:46.519 00:36:39 -- setup/common.sh@33 -- # echo 1024 00:02:46.519 00:36:39 -- setup/common.sh@33 -- # return 0 00:02:46.519 00:36:39 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:02:46.519 00:36:39 -- setup/hugepages.sh@112 -- # get_nodes 00:02:46.519 00:36:39 -- setup/hugepages.sh@27 -- # local node 00:02:46.519 00:36:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.519 00:36:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:02:46.519 00:36:39 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:02:46.519 00:36:39 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:02:46.519 00:36:39 -- setup/hugepages.sh@32 -- # no_nodes=2 00:02:46.519 00:36:39 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:02:46.519 00:36:39 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:02:46.519 00:36:39 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:02:46.519 00:36:39 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:02:46.519 00:36:39 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:02:46.519 00:36:39 -- setup/common.sh@18 -- # local node=0 00:02:46.519 00:36:39 -- setup/common.sh@19 -- # local var val 00:02:46.519 00:36:39 -- setup/common.sh@20 -- # local mem_f mem 00:02:46.519 00:36:39 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:02:46.519 00:36:39 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:02:46.519 00:36:39 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:02:46.519 00:36:39 -- setup/common.sh@28 -- # mapfile -t mem 00:02:46.519 00:36:39 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # IFS=': ' 00:02:46.519 00:36:39 -- setup/common.sh@31 -- # read -r 
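The scan traced above is setup/common.sh's get_meminfo pattern: split each meminfo line on ': ', skip non-matching keys with continue, and print the value of the requested key. A minimal stand-alone sketch of that pattern (not the verbatim SPDK helper -- the real script uses mapfile plus a prefix strip; the sed preprocessing here is a simplification):

    get_meminfo() {   # usage: get_meminfo HugePages_Surp [node]
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        # per-node counters live in sysfs; fall back to the global file otherwise
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(sed 's/^Node [0-9]* //' "$mem_f")   # drop the "Node N " prefix of per-node files
        return 1
    }

Called as `get_meminfo HugePages_Total`, it prints 1024 here, which is exactly the value the hugepages check above compares against nr_hugepages + surp + resv.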
00:02:46.519 00:36:39 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 131816224 kB' 'MemFree: 125378004 kB' 'MemUsed: 6438220 kB' 'SwapCached: 0 kB' 'Active: 2193556 kB' 'Inactive: 118148 kB' 'Active(anon): 1792928 kB' 'Inactive(anon): 0 kB' 'Active(file): 400628 kB' 'Inactive(file): 118148 kB' 'Unevictable: 9000 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2043496 kB' 'Mapped: 126824 kB' 'AnonPages: 277320 kB' 'Shmem: 1524720 kB' 'KernelStack: 12824 kB' 'PageTables: 5300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 158668 kB' 'Slab: 478700 kB' 'SReclaimable: 158668 kB' 'SUnreclaim: 320032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the same compare/continue/read cycle as above now walks the node0 fields (MemTotal through HugePages_Free) looking for HugePages_Surp ...]
00:02:46.520 00:36:39 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:02:46.520 00:36:39 -- setup/common.sh@33 -- # echo 0
00:02:46.520 00:36:39 -- setup/common.sh@33 -- # return 0
00:02:46.520 00:36:39 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:02:46.520 00:36:39 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:02:46.520 00:36:39 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:02:46.520 00:36:39 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:02:46.520 00:36:39 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:02:46.520 node0=1024 expecting 1024
00:02:46.520 00:36:39 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:02:46.520 real 0m5.614s
00:02:46.520 user 0m1.880s
00:02:46.520 sys 0m3.436s
00:02:46.520 00:36:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:46.520 00:36:39 -- common/autotest_common.sh@10 -- # set +x
00:02:46.520 ************************************
00:02:46.520 END TEST no_shrink_alloc
00:02:46.520 ************************************
00:02:46.782 00:36:39 -- setup/hugepages.sh@217 -- # clear_hp
00:02:46.782 00:36:39 -- setup/hugepages.sh@37 -- # local node hp
00:02:46.782 00:36:39 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:02:46.782 00:36:39 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:02:46.782 00:36:39 -- setup/hugepages.sh@41 -- # echo 0
[... the @40/@41 pair repeats for the second hugepage size on node0 and for both sizes on node1 ...]
00:02:46.782 00:36:39 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:02:46.782 00:36:39 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:02:46.782 real 0m23.817s
00:02:46.782 user 0m7.256s
00:02:46.782 sys 0m13.410s
00:02:46.782 00:36:39 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:02:46.782 00:36:39 -- common/autotest_common.sh@10 -- # set +x
00:02:46.782 ************************************
00:02:46.782 END TEST hugepages
00:02:46.782 ************************************
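The clear_hp trace above zeroes every per-node hugepage pool before the next test takes over. Sketched as a standalone function; the trace only shows `echo 0`, so the redirection target (the pool's nr_hugepages knob) is an assumption:

    clear_hp() {
        local node hp
        for node in /sys/devices/system/node/node[0-9]*; do
            for hp in "$node"/hugepages/hugepages-*; do
                # assumption: the zero lands in the per-size pool counter
                echo 0 > "$hp/nr_hugepages"
            done
        done
        export CLEAR_HUGE=yes   # signal later stages that pools were drained
    }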
00:02:46.782 00:36:39 -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh
00:02:46.782 00:36:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:46.782 00:36:39 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:46.782 00:36:39 -- common/autotest_common.sh@10 -- # set +x
00:02:46.782 ************************************
00:02:46.782 START TEST driver
00:02:46.782 ************************************
00:02:46.782 00:36:39 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/driver.sh
00:02:46.782 * Looking for test storage...
00:02:46.782 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup
00:02:46.782 00:36:39 -- setup/driver.sh@68 -- # setup reset
00:02:46.782 00:36:39 -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:46.782 00:36:39 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:02:50.985 00:36:43 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver
00:02:50.985 00:36:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:02:50.985 00:36:43 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:02:50.985 00:36:43 -- common/autotest_common.sh@10 -- # set +x
00:02:51.245 ************************************
00:02:51.245 START TEST guess_driver
00:02:51.245 ************************************
00:02:51.245 00:36:43 -- common/autotest_common.sh@1111 -- # guess_driver
00:02:51.245 00:36:43 -- setup/driver.sh@46 -- # local driver setup_driver marker
00:02:51.245 00:36:43 -- setup/driver.sh@47 -- # local fail=0
00:02:51.245 00:36:43 -- setup/driver.sh@49 -- # pick_driver
00:02:51.245 00:36:43 -- setup/driver.sh@36 -- # vfio
00:02:51.245 00:36:43 -- setup/driver.sh@21 -- # local iommu_groups
00:02:51.245 00:36:43 -- setup/driver.sh@22 -- # local unsafe_vfio
00:02:51.245 00:36:43 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]]
00:02:51.245 00:36:43 -- setup/driver.sh@25 -- # unsafe_vfio=N
00:02:51.245 00:36:43 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*)
00:02:51.245 00:36:43 -- setup/driver.sh@29 -- # (( 335 > 0 ))
00:02:51.245 00:36:43 -- setup/driver.sh@30 -- # is_driver vfio_pci
00:02:51.245 00:36:43 -- setup/driver.sh@14 -- # mod vfio_pci
00:02:51.245 00:36:43 -- setup/driver.sh@12 -- # dep vfio_pci
00:02:51.245 00:36:43 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci
00:02:51.246 00:36:43 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz
insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]]
00:02:51.246 00:36:43 -- setup/driver.sh@30 -- # return 0
00:02:51.246 00:36:43 -- setup/driver.sh@37 -- # echo vfio-pci
00:02:51.246 00:36:43 -- setup/driver.sh@49 -- # driver=vfio-pci
00:02:51.246 00:36:43 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:02:51.246 00:36:43 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:02:51.246 Looking for driver=vfio-pci
00:02:51.246 00:36:43 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:02:51.246 00:36:43 -- setup/driver.sh@45 -- # setup output config
00:02:51.246 00:36:43 -- setup/common.sh@9 -- # [[ output == output ]]
00:02:51.246 00:36:43 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
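pick_driver's vfio branch, as traced above: vfio-pci is eligible when the host exposes IOMMU groups (335 here) or unsafe no-IOMMU mode is on, and when modprobe can resolve the module's .ko dependency chain. A condensed sketch of that decision (the exact interplay of the two eligibility checks is simplified):

    vfio() {
        local iommu_groups unsafe_vfio=N
        [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] &&
            unsafe_vfio=$(</sys/module/vfio/parameters/enable_unsafe_noiommu_mode)
        iommu_groups=(/sys/kernel/iommu_groups/*)
        if (( ${#iommu_groups[@]} > 0 )) || [[ $unsafe_vfio == Y ]]; then
            # is_driver: the module resolves if --show-depends names at least one .ko
            [[ $(modprobe --show-depends vfio_pci) == *.ko* ]] && { echo vfio-pci; return 0; }
        fi
        return 1
    }

Note that modprobe lists iommufd.ko and vfio.ko twice in the trace; that duplication comes straight from the dependency resolution and is harmless to the *.ko* glob match.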
00:02:54.572 00:36:46 -- setup/driver.sh@58 -- # [[ -> == \-\> ]]
00:02:54.572 00:36:46 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]]
00:02:54.572 00:36:46 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
[... the marker/driver/read cycle repeats for every device line printed by the config pass (timestamps 00:02:54.572 through 00:02:56.480); each marker is "->" and each setup_driver is vfio-pci, so fail stays 0 ...]
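The cycle condensed above is guess_driver validating the config pass: every device line that `setup.sh config` prints ends in "-> <driver>", and fail stays 0 only if each marker names the driver just picked. In sketch form (`setup` stands for the traced "setup output config" helper that wraps scripts/setup.sh):

    driver=vfio-pci fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue          # only device lines carry the arrow
        [[ $setup_driver == "$driver" ]] || fail=1 # any mismatch fails the guess
    done < <(setup output config)
    (( fail == 0 ))   # succeeds here, as the trace shows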
00:02:56.480 00:36:49 -- setup/driver.sh@64 -- # (( fail == 0 ))
00:02:56.480 00:36:49 -- setup/driver.sh@65 -- # setup reset
00:02:56.480 00:36:49 -- setup/common.sh@9 -- # [[ reset == output ]]
00:02:56.480 00:36:49 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:03:01.767 real 0m9.678s
00:03:01.767 user 0m2.162s
00:03:01.767 sys 0m4.176s
00:03:01.767 00:36:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:01.767 00:36:53 -- common/autotest_common.sh@10 -- # set +x
00:03:01.767 ************************************
00:03:01.767 END TEST guess_driver
00:03:01.767 ************************************
00:03:01.767 real 0m14.115s
00:03:01.767 user 0m3.278s
00:03:01.767 sys 0m6.443s
00:03:01.767 00:36:53 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:03:01.767 00:36:53 -- common/autotest_common.sh@10 -- # set +x
00:03:01.767 ************************************
00:03:01.767 END TEST driver
00:03:01.767 ************************************
00:03:01.767 00:36:53 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh
00:03:01.767 00:36:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:01.767 00:36:53 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:01.767 00:36:53 -- common/autotest_common.sh@10 -- # set +x
00:03:01.767 ************************************
00:03:01.767 START TEST devices
00:03:01.767 ************************************
00:03:01.767 00:36:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/devices.sh
00:03:01.767 * Looking for test storage...
00:03:01.767 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup
00:03:01.767 00:36:53 -- setup/devices.sh@190 -- # trap cleanup EXIT
00:03:01.767 00:36:53 -- setup/devices.sh@192 -- # setup reset
00:03:01.767 00:36:53 -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:01.767 00:36:53 -- setup/common.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset
00:03:04.312 00:36:56 -- setup/devices.sh@194 -- # get_zoned_devs
00:03:04.312 00:36:56 -- common/autotest_common.sh@1655 -- # zoned_devs=()
00:03:04.312 00:36:56 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs
00:03:04.312 00:36:56 -- common/autotest_common.sh@1656 -- # local nvme bdf
00:03:04.312 00:36:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme*
00:03:04.312 00:36:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1
00:03:04.312 00:36:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1
00:03:04.312 00:36:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:04.312 00:36:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]]
[... the same is_block_zoned check runs for nvme1n1 and nvme2n1; none of the three namespaces is zoned ...]
00:03:04.312 00:36:56 -- setup/devices.sh@196 -- # blocks=()
00:03:04.312 00:36:56 -- setup/devices.sh@196 -- # declare -a blocks
00:03:04.312 00:36:56 -- setup/devices.sh@197 -- # blocks_to_pci=()
00:03:04.312 00:36:56 -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:03:04.312 00:36:56 -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:03:04.312 00:36:56 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:03:04.312 00:36:56 -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:03:04.312 00:36:56 -- setup/devices.sh@201 -- # ctrl=nvme0
00:03:04.312 00:36:56 -- setup/devices.sh@202 -- # pci=0000:c9:00.0
00:03:04.312 00:36:56 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\c\9\:\0\0\.\0* ]]
00:03:04.312 00:36:56 -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:03:04.312 00:36:56 -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:03:04.312 00:36:56 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:03:04.313 No valid GPT data, bailing
00:03:04.313 00:36:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:04.313 00:36:56 -- scripts/common.sh@391 -- # pt=
00:03:04.313 00:36:56 -- scripts/common.sh@392 -- # return 1
00:03:04.313 00:36:56 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:03:04.313 00:36:56 -- setup/common.sh@76 -- # local dev=nvme0n1
00:03:04.313 00:36:56 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:03:04.313 00:36:56 -- setup/common.sh@80 -- # echo 2000398934016
00:03:04.313 00:36:56 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size ))
00:03:04.313 00:36:56 -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:03:04.313 00:36:56 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:c9:00.0
[... the same block_in_use / sec_size_to_bytes pass accepts nvme1n1 (0000:cb:00.0) and nvme2n1 (0000:ca:00.0), each 2000398934016 bytes, with "No valid GPT data, bailing" in both cases ...]
00:03:04.313 00:36:56 -- setup/devices.sh@209 -- # (( 3 > 0 ))
00:03:04.313 00:36:56 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
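The three per-namespace blocks above implement one filter: skip zoned namespaces, skip disks smaller than min_disk_size, and record the backing PCI address of each keeper. A sketch, with one assumption flagged inline (the trace assigns each pci= directly, so the readlink derivation below is illustrative, not SPDK's code):

    declare -a blocks; declare -A blocks_to_pci
    min_disk_size=3221225472                                  # 3 GiB, as in the trace
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        [[ $(<"$block/queue/zoned") != none ]] && continue            # zoned: skip
        (( $(<"$block/size") * 512 >= min_disk_size )) || continue    # sectors -> bytes
        # assumption: resolve the backing PCI function via the namespace's sysfs parent
        pci=$(basename "$(readlink -f "$block/device/device")")
        blocks+=("$dev"); blocks_to_pci[$dev]=$pci
    done

block_in_use additionally runs scripts/spdk-gpt.py against each candidate; the "No valid GPT data, bailing" lines above mean the disks carry no live partition table and are free to use.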
00:03:04.313 00:36:56 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:03:04.313 00:36:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']'
00:03:04.313 00:36:56 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:03:04.313 00:36:56 -- common/autotest_common.sh@10 -- # set +x
00:03:04.313 ************************************
00:03:04.313 START TEST nvme_mount
00:03:04.313 ************************************
00:03:04.313 00:36:56 -- common/autotest_common.sh@1111 -- # nvme_mount
00:03:04.313 00:36:56 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:03:04.313 00:36:56 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:03:04.313 00:36:56 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:04.313 00:36:56 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:04.313 00:36:56 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:03:04.313 00:36:56 -- setup/common.sh@39 -- # local disk=nvme0n1
00:03:04.313 00:36:56 -- setup/common.sh@40 -- # local part_no=1
00:03:04.313 00:36:56 -- setup/common.sh@41 -- # local size=1073741824
00:03:04.313 00:36:56 -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:03:04.313 00:36:56 -- setup/common.sh@44 -- # parts=()
00:03:04.313 00:36:56 -- setup/common.sh@44 -- # local parts
00:03:04.313 00:36:56 -- setup/common.sh@46 -- # (( part = 1 ))
00:03:04.313 00:36:56 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:04.313 00:36:56 -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:03:04.313 00:36:56 -- setup/common.sh@46 -- # (( part++ ))
00:03:04.313 00:36:56 -- setup/common.sh@46 -- # (( part <= part_no ))
00:03:04.313 00:36:56 -- setup/common.sh@51 -- # (( size /= 512 ))
00:03:04.313 00:36:56 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all
00:03:04.313 00:36:56 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1
00:03:05.255 Creating new GPT entries in memory.
00:03:05.255 GPT data structures destroyed! You may now partition the disk using fdisk or
00:03:05.255 other utilities.
00:03:05.255 00:36:57 -- setup/common.sh@57 -- # (( part = 1 ))
00:03:05.255 00:36:57 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:05.255 00:36:57 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
00:03:05.255 00:36:57 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 ))
00:03:05.255 00:36:57 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
00:03:06.639 Creating new GPT entries in memory.
00:03:06.639 The operation has completed successfully.
00:03:06.639 00:36:58 -- setup/common.sh@57 -- # (( part++ ))
00:03:06.639 00:36:58 -- setup/common.sh@57 -- # (( part <= part_no ))
00:03:06.639 00:36:58 -- setup/common.sh@62 -- # wait 2519484
00:03:06.639 00:36:58 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:06.639 00:36:58 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=
00:03:06.639 00:36:58 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:06.639 00:36:58 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]]
00:03:06.639 00:36:58 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1
00:03:06.639 00:36:59 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
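partition_drive plus mkfs, as just traced, in sequence: 1073741824 / 512 = 2097152 sectors, so the single 1 GiB partition spans LBA 2048 through 2099199. The real flow also serializes the resulting udev partition events through scripts/sync_dev_uevents.sh (the `wait` above) before touching the new node. Condensed sketch:

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                            # "GPT data structures destroyed!"
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199   # 1 GiB partition, per the arithmetic above
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"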
00:03:06.639 00:36:59 -- setup/devices.sh@105 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:06.639 00:36:59 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0
00:03:06.639 00:36:59 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1
00:03:06.639 00:36:59 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:06.639 00:36:59 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:06.639 00:36:59 -- setup/devices.sh@53 -- # local found=0
00:03:06.639 00:36:59 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:06.639 00:36:59 -- setup/devices.sh@56 -- # :
00:03:06.639 00:36:59 -- setup/devices.sh@59 -- # local pci status
00:03:06.639 00:36:59 -- setup/devices.sh@60 -- # read -r pci _ _ status
00:03:06.639 00:36:59 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0
00:03:06.639 00:36:59 -- setup/devices.sh@47 -- # setup output config
00:03:06.639 00:36:59 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:06.639 00:36:59 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
00:03:09.182 00:37:01 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]]
00:03:09.182 00:37:01 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:03:09.182 00:37:01 -- setup/devices.sh@63 -- # found=1
00:03:09.182 00:37:01 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the @62 compare / @60 read pair walks the rest of the config output (0000:74:02.0, 0000:f1:02.0, 0000:cb:00.0, 0000:79:02.0, 0000:6f:01.0, 0000:6f:02.0, 0000:f6:01.0, 0000:f6:02.0, 0000:74:01.0, 0000:6a:02.0, 0000:79:01.0, 0000:ec:01.0, 0000:6a:01.0, 0000:ca:00.0, 0000:ec:02.0, 0000:e7:01.0, 0000:e7:02.0, 0000:f1:01.0); only 0000:c9:00.0 matches ...]
00:03:09.182 00:37:01 -- setup/devices.sh@66 -- # (( found == 1 ))
00:03:09.182 00:37:01 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]]
00:03:09.182 00:37:01 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:09.182 00:37:01 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:03:09.182 00:37:01 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:03:09.182 00:37:01 -- setup/devices.sh@110 -- # cleanup_nvme
00:03:09.182 00:37:01 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:09.182 00:37:01 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:09.182 00:37:01 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:03:09.182 00:37:01 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:03:09.183 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:03:09.183 00:37:01 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:03:09.183 00:37:01 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:03:09.443 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:03:09.443 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54
00:03:09.443 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:03:09.443 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:03:09.443 00:37:02 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:03:09.443 00:37:02 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:03:09.443 00:37:02 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:09.443 00:37:02 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:03:09.443 00:37:02 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:03:09.443 00:37:02 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount
00:03:09.443 00:37:02 -- setup/devices.sh@116 -- # verify 0000:c9:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
[... the same @48-@60 locals as the first verify, now with mounts=nvme0n1:nvme0n1 ...]
00:03:09.443 00:37:02 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0
00:03:09.443 00:37:02 -- setup/devices.sh@47 -- # setup output config
00:03:09.443 00:37:02 -- setup/common.sh@9 -- # [[ output == output ]]
00:03:09.443 00:37:02 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config
00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]]
00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]]
00:03:11.979 00:37:04 -- setup/devices.sh@63 -- # found=1
00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status
[... the allow-list walk repeats over the same PCI set; again only 0000:c9:00.0 matches ...]
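cleanup_nvme, as traced above: unmount if still mounted, then wipe the filesystem and partition-table signatures. The wipefs bytes are recognizable on sight: 53 ef at offset 0x438 is the ext4 superblock magic (0xEF53 little-endian), 45 46 49 20 50 41 52 54 is the ASCII "EFI PART" GPT header at both ends of the disk, and 55 aa is the protective-MBR boot signature. Sketch:

    cleanup_nvme() {
        mountpoint -q "$nvme_mount" && umount "$nvme_mount"
        [[ -b /dev/nvme0n1p1 ]] && wipefs --all /dev/nvme0n1p1   # ext4 magic on the partition
        [[ -b /dev/nvme0n1 ]] && wipefs --all /dev/nvme0n1       # GPT headers + protective MBR
    }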
setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.979 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.979 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.980 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.980 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:11.980 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:11.980 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.239 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:12.239 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.239 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:12.239 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.239 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:12.239 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.239 00:37:04 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:12.239 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.239 00:37:04 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:12.239 00:37:04 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:12.239 00:37:04 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.239 00:37:04 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:12.239 00:37:04 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:12.239 00:37:04 -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:12.498 00:37:04 -- setup/devices.sh@125 -- # verify 0000:c9:00.0 data@nvme0n1 '' '' 00:03:12.498 00:37:04 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:12.498 00:37:04 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:12.498 00:37:04 -- setup/devices.sh@50 -- # local mount_point= 00:03:12.498 00:37:04 -- setup/devices.sh@51 -- # local test_file= 00:03:12.498 00:37:04 -- setup/devices.sh@53 -- # local found=0 00:03:12.498 00:37:04 -- setup/devices.sh@55 
-- # [[ -n '' ]] 00:03:12.498 00:37:04 -- setup/devices.sh@59 -- # local pci status 00:03:12.498 00:37:04 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:12.498 00:37:04 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:12.498 00:37:04 -- setup/devices.sh@47 -- # setup output config 00:03:12.498 00:37:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.498 00:37:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:15.034 00:37:07 -- setup/devices.sh@63 -- # found=1 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:cb:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 
00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.034 00:37:07 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:15.034 00:37:07 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:15.348 00:37:07 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:15.348 00:37:07 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:15.348 00:37:07 -- setup/devices.sh@68 -- # return 0 00:03:15.348 00:37:07 -- setup/devices.sh@128 -- # cleanup_nvme 00:03:15.348 00:37:07 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:15.348 00:37:07 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:15.348 00:37:07 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:15.348 00:37:07 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:15.348 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:15.348 00:03:15.348 real 0m10.904s 00:03:15.348 user 0m2.715s 00:03:15.348 sys 0m5.290s 00:03:15.348 00:37:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:15.348 00:37:07 -- common/autotest_common.sh@10 -- # set +x 00:03:15.348 ************************************ 00:03:15.348 END TEST nvme_mount 00:03:15.348 ************************************ 00:03:15.348 00:37:07 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:15.348 00:37:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:15.348 00:37:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:15.348 00:37:07 -- common/autotest_common.sh@10 -- # set +x 00:03:15.348 ************************************ 00:03:15.348 START TEST dm_mount 00:03:15.348 ************************************ 00:03:15.348 00:37:07 -- common/autotest_common.sh@1111 -- # dm_mount 00:03:15.348 00:37:07 -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:15.348 00:37:07 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:15.348 00:37:07 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:15.348 00:37:07 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:15.348 00:37:07 -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:15.348 00:37:07 -- setup/common.sh@40 -- # local part_no=2 00:03:15.348 00:37:07 -- setup/common.sh@41 -- # local size=1073741824 00:03:15.348 00:37:07 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:15.348 00:37:07 -- setup/common.sh@44 -- # parts=() 00:03:15.348 00:37:07 -- setup/common.sh@44 -- # local parts 00:03:15.348 00:37:07 -- setup/common.sh@46 -- # (( part = 1 )) 00:03:15.348 00:37:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:15.348 00:37:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:15.348 00:37:07 -- setup/common.sh@46 -- # (( part++ )) 00:03:15.348 00:37:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:15.348 00:37:07 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:15.348 00:37:07 -- setup/common.sh@46 -- # (( part++ )) 00:03:15.348 00:37:07 -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:15.348 00:37:07 -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:15.348 00:37:07 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:15.348 00:37:07 -- setup/common.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:16.304 Creating new GPT entries in memory. 
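The trace above closes TEST nvme_mount and enters dm_mount, whose partition_drive step zaps the GPT and then carves two 1 GiB partitions out of /dev/nvme0n1 with sgdisk. Below is a minimal standalone sketch of that arithmetic, assuming the same scratch disk as the trace (it is destructive, so only run it against a disposable device):

    disk=/dev/nvme0n1                  # target disk as in the trace; an assumption for this sketch
    size=$(( 1073741824 / 512 ))       # 1 GiB expressed in 512-byte sectors, as in common.sh@51
    sgdisk "$disk" --zap-all           # destroy any existing GPT/MBR structures
    part_start=0 part_end=0
    for part in 1 2; do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end   = part_start + size - 1 ))
        flock "$disk" sgdisk "$disk" --new=${part}:${part_start}:${part_end}
    done

This reproduces the --new=1:2048:2099199 and --new=2:2099200:4196351 calls logged just below; the flock mirrors how the trace serializes sgdisk invocations on the disk node.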
00:03:16.304 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:16.304 other utilities. 00:03:16.304 00:37:08 -- setup/common.sh@57 -- # (( part = 1 )) 00:03:16.304 00:37:08 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:16.304 00:37:08 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:16.304 00:37:08 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:16.304 00:37:08 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:17.689 Creating new GPT entries in memory. 00:03:17.690 The operation has completed successfully. 00:03:17.690 00:37:09 -- setup/common.sh@57 -- # (( part++ )) 00:03:17.690 00:37:09 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:17.690 00:37:09 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:17.690 00:37:09 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:17.690 00:37:09 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:18.634 The operation has completed successfully. 00:03:18.634 00:37:10 -- setup/common.sh@57 -- # (( part++ )) 00:03:18.634 00:37:10 -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:18.634 00:37:10 -- setup/common.sh@62 -- # wait 2524624 00:03:18.634 00:37:11 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:18.634 00:37:11 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:18.634 00:37:11 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:18.634 00:37:11 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:18.634 00:37:11 -- setup/devices.sh@160 -- # for t in {1..5} 00:03:18.634 00:37:11 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:18.634 00:37:11 -- setup/devices.sh@161 -- # break 00:03:18.634 00:37:11 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:18.634 00:37:11 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:18.634 00:37:11 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:18.634 00:37:11 -- setup/devices.sh@166 -- # dm=dm-0 00:03:18.634 00:37:11 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:18.634 00:37:11 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:18.634 00:37:11 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:18.634 00:37:11 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount size= 00:03:18.634 00:37:11 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:18.634 00:37:11 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:18.634 00:37:11 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:18.634 00:37:11 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:18.635 00:37:11 -- setup/devices.sh@174 -- # verify 0000:c9:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:18.635 00:37:11 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:18.635 00:37:11 -- 
setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:18.635 00:37:11 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:18.635 00:37:11 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:18.635 00:37:11 -- setup/devices.sh@53 -- # local found=0 00:03:18.635 00:37:11 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:18.635 00:37:11 -- setup/devices.sh@56 -- # : 00:03:18.635 00:37:11 -- setup/devices.sh@59 -- # local pci status 00:03:18.635 00:37:11 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:18.635 00:37:11 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:18.635 00:37:11 -- setup/devices.sh@47 -- # setup output config 00:03:18.635 00:37:11 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.635 00:37:11 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:21.173 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.173 00:37:13 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:21.173 00:37:13 -- setup/devices.sh@63 -- # found=1 00:03:21.173 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.173 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.173 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.173 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.173 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.173 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:cb:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.173 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:13 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:13 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:14 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:14 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:14 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.434 00:37:14 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:21.434 00:37:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.694 00:37:14 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:21.694 00:37:14 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:21.694 00:37:14 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:21.694 00:37:14 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:21.694 00:37:14 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:21.694 00:37:14 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:21.694 00:37:14 -- setup/devices.sh@184 -- # verify 0000:c9:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:21.694 00:37:14 -- setup/devices.sh@48 -- # local dev=0000:c9:00.0 00:03:21.694 00:37:14 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:21.694 00:37:14 -- setup/devices.sh@50 -- # local mount_point= 00:03:21.694 00:37:14 -- setup/devices.sh@51 -- # local test_file= 00:03:21.694 00:37:14 -- setup/devices.sh@53 -- # local found=0 00:03:21.694 00:37:14 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:21.694 00:37:14 -- setup/devices.sh@59 -- # local pci status 00:03:21.694 00:37:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:21.694 00:37:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:c9:00.0 00:03:21.694 00:37:14 -- setup/devices.sh@47 -- # setup output config 00:03:21.694 00:37:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.694 00:37:14 -- setup/common.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh config 00:03:24.236 00:37:16 -- setup/devices.sh@62 -- # [[ 0000:c9:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.236 00:37:16 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:24.236 00:37:16 -- setup/devices.sh@63 -- # found=1 00:03:24.236 00:37:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.236 00:37:16 -- setup/devices.sh@62 -- # [[ 0000:74:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.236 00:37:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.236 00:37:16 -- setup/devices.sh@62 -- # [[ 0000:f1:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.236 00:37:16 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.236 00:37:16 -- setup/devices.sh@62 -- # [[ 0000:cb:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.236 00:37:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:79:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:6f:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:6f:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:f6:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:f6:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:74:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:6a:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:79:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:ec:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:6a:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:ca:00.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:ec:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:e7:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:e7:02.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.496 00:37:17 -- setup/devices.sh@62 -- # [[ 0000:f1:01.0 == \0\0\0\0\:\c\9\:\0\0\.\0 ]] 00:03:24.496 00:37:17 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:24.757 00:37:17 -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:24.757 00:37:17 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:24.757 00:37:17 -- setup/devices.sh@68 -- # return 0 00:03:24.757 00:37:17 -- setup/devices.sh@187 -- # cleanup_dm 00:03:24.757 00:37:17 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:24.757 00:37:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:24.757 00:37:17 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:24.757 00:37:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:24.757 00:37:17 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:24.757 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 
ef 00:03:24.757 00:37:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:24.757 00:37:17 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:24.757 00:03:24.757 real 0m9.445s 00:03:24.757 user 0m2.076s 00:03:24.757 sys 0m3.997s 00:03:24.757 00:37:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:24.757 00:37:17 -- common/autotest_common.sh@10 -- # set +x 00:03:24.757 ************************************ 00:03:24.757 END TEST dm_mount 00:03:24.757 ************************************ 00:03:24.757 00:37:17 -- setup/devices.sh@1 -- # cleanup 00:03:24.757 00:37:17 -- setup/devices.sh@11 -- # cleanup_nvme 00:03:24.757 00:37:17 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/nvme_mount 00:03:24.757 00:37:17 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:24.757 00:37:17 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:24.757 00:37:17 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:24.757 00:37:17 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:25.017 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:25.017 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:03:25.017 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:25.017 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:25.017 00:37:17 -- setup/devices.sh@12 -- # cleanup_dm 00:03:25.017 00:37:17 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/dsa-phy-autotest/spdk/test/setup/dm_mount 00:03:25.017 00:37:17 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:25.017 00:37:17 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:25.017 00:37:17 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:25.017 00:37:17 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:25.017 00:37:17 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:25.017 00:03:25.017 real 0m24.147s 00:03:25.017 user 0m6.005s 00:03:25.017 sys 0m11.431s 00:03:25.017 00:37:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:25.277 00:37:17 -- common/autotest_common.sh@10 -- # set +x 00:03:25.277 ************************************ 00:03:25.277 END TEST devices 00:03:25.277 ************************************ 00:03:25.277 00:03:25.277 real 1m27.423s 00:03:25.277 user 0m23.179s 00:03:25.277 sys 0m44.111s 00:03:25.277 00:37:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:25.277 00:37:17 -- common/autotest_common.sh@10 -- # set +x 00:03:25.277 ************************************ 00:03:25.277 END TEST setup.sh 00:03:25.277 ************************************ 00:03:25.277 00:37:17 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh status 00:03:28.571 Hugepages 00:03:28.571 node hugesize free / total 00:03:28.571 node0 1048576kB 0 / 0 00:03:28.571 node0 2048kB 2048 / 2048 00:03:28.571 node1 1048576kB 0 / 0 00:03:28.571 node1 2048kB 0 / 0 00:03:28.571 00:03:28.571 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:28.571 DSA 0000:6a:01.0 8086 0b25 0 idxd - - 00:03:28.571 IAA 0000:6a:02.0 8086 0cfe 0 idxd - - 00:03:28.571 DSA 0000:6f:01.0 8086 0b25 0 idxd - - 00:03:28.571 IAA 0000:6f:02.0 8086 0cfe 0 idxd - - 00:03:28.571 DSA 0000:74:01.0 8086 0b25 0 idxd - - 00:03:28.571 IAA 0000:74:02.0 8086 0cfe 0 idxd - - 00:03:28.571 DSA 0000:79:01.0 8086 0b25 0 idxd - - 00:03:28.571 IAA 0000:79:02.0 8086 0cfe 0 idxd - - 
00:03:28.571 NVMe 0000:c9:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:28.571 NVMe 0000:ca:00.0 8086 0a54 1 nvme nvme2 nvme2n1 00:03:28.571 NVMe 0000:cb:00.0 8086 0a54 1 nvme nvme1 nvme1n1 00:03:28.571 DSA 0000:e7:01.0 8086 0b25 1 idxd - - 00:03:28.571 IAA 0000:e7:02.0 8086 0cfe 1 idxd - - 00:03:28.571 DSA 0000:ec:01.0 8086 0b25 1 idxd - - 00:03:28.571 IAA 0000:ec:02.0 8086 0cfe 1 idxd - - 00:03:28.571 DSA 0000:f1:01.0 8086 0b25 1 idxd - - 00:03:28.571 IAA 0000:f1:02.0 8086 0cfe 1 idxd - - 00:03:28.571 DSA 0000:f6:01.0 8086 0b25 1 idxd - - 00:03:28.571 IAA 0000:f6:02.0 8086 0cfe 1 idxd - - 00:03:28.571 00:37:20 -- spdk/autotest.sh@130 -- # uname -s 00:03:28.571 00:37:20 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:28.571 00:37:20 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:28.571 00:37:20 -- common/autotest_common.sh@1517 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:31.863 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:31.863 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:31.863 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:31.863 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:31.863 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:31.863 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:31.863 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:31.863 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:31.863 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:31.863 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:31.863 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:31.863 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:31.863 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:31.863 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:31.863 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:31.863 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:33.250 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.250 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.509 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:03:33.769 00:37:26 -- common/autotest_common.sh@1518 -- # sleep 1 00:03:34.713 00:37:27 -- common/autotest_common.sh@1519 -- # bdfs=() 00:03:34.713 00:37:27 -- common/autotest_common.sh@1519 -- # local bdfs 00:03:34.713 00:37:27 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:34.713 00:37:27 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:34.713 00:37:27 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:34.713 00:37:27 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:34.713 00:37:27 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:34.713 00:37:27 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:34.713 00:37:27 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:34.713 00:37:27 -- common/autotest_common.sh@1501 -- # (( 3 == 0 )) 00:03:34.713 00:37:27 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:03:34.713 00:37:27 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:03:38.004 Waiting for block devices as requested 00:03:38.004 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:03:38.004 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:38.004 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:38.004 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:03:38.004 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:38.264 0000:6f:01.0 (8086 0b25): vfio-pci -> 
idxd 00:03:38.264 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:38.264 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:03:38.264 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:38.525 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:03:38.525 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:38.525 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:03:38.525 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:03:38.809 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:03:38.809 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:03:38.809 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:38.809 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:03:39.070 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:03:39.070 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:03:39.331 00:37:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:39.331 00:37:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:c9:00.0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # grep 0000:c9:00.0/nvme/nvme 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:c7/0000:c7:03.0/0000:c9:00.0/nvme/nvme0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:39.331 00:37:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:39.331 00:37:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:39.331 00:37:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1543 -- # continue 00:03:39.331 00:37:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:39.331 00:37:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:ca:00.0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # grep 0000:ca:00.0/nvme/nvme 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme2 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:c7/0000:c7:05.0/0000:ca:00.0/nvme/nvme2 00:03:39.331 00:37:31 -- 
common/autotest_common.sh@1493 -- # printf '%s\n' nvme2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:39.331 00:37:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:39.331 00:37:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:39.331 00:37:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1543 -- # continue 00:03:39.331 00:37:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:39.331 00:37:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:cb:00.0 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # grep 0000:cb:00.0/nvme/nvme 00:03:39.331 00:37:31 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:c7/0000:c7:07.0/0000:cb:00.0/nvme/nvme1 00:03:39.331 00:37:31 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:c7/0000:c7:07.0/0000:cb:00.0/nvme/nvme1 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:c7/0000:c7:07.0/0000:cb:00.0/nvme/nvme1 00:03:39.331 00:37:31 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:03:39.331 00:37:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:03:39.331 00:37:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:39.331 00:37:31 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:39.331 00:37:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:39.331 00:37:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:39.331 00:37:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:39.331 00:37:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:39.331 00:37:31 -- common/autotest_common.sh@1543 -- # continue 00:03:39.331 00:37:31 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:39.331 00:37:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:39.331 00:37:31 -- common/autotest_common.sh@10 -- # set +x 00:03:39.331 00:37:31 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:39.331 00:37:31 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:39.331 00:37:31 -- common/autotest_common.sh@10 -- # set +x 00:03:39.331 00:37:31 -- 
spdk/autotest.sh@139 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:03:42.703 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:42.703 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:42.703 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:42.703 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:03:42.703 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:42.703 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:03:42.703 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:42.703 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:03:42.703 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:42.703 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:03:42.703 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:03:42.703 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:03:42.985 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:42.985 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:03:42.985 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:03:42.985 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:03:44.366 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.366 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.937 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:03:44.937 00:37:37 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:44.937 00:37:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:03:44.937 00:37:37 -- common/autotest_common.sh@10 -- # set +x 00:03:44.937 00:37:37 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:44.937 00:37:37 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:03:44.937 00:37:37 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:03:44.937 00:37:37 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:44.937 00:37:37 -- common/autotest_common.sh@1563 -- # local bdfs 00:03:44.937 00:37:37 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:03:44.937 00:37:37 -- common/autotest_common.sh@1499 -- # bdfs=() 00:03:44.937 00:37:37 -- common/autotest_common.sh@1499 -- # local bdfs 00:03:44.937 00:37:37 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:44.937 00:37:37 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:44.937 00:37:37 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:03:45.199 00:37:37 -- common/autotest_common.sh@1501 -- # (( 3 == 0 )) 00:03:45.199 00:37:37 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:03:45.199 00:37:37 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:45.199 00:37:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:c9:00.0/device 00:03:45.199 00:37:37 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:45.199 00:37:37 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:45.199 00:37:37 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:45.199 00:37:37 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:45.199 00:37:37 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:ca:00.0/device 00:03:45.199 00:37:37 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:45.199 00:37:37 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:45.199 00:37:37 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:45.199 00:37:37 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:03:45.199 00:37:37 -- common/autotest_common.sh@1566 -- # cat 
/sys/bus/pci/devices/0000:cb:00.0/device 00:03:45.199 00:37:37 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:45.199 00:37:37 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:45.199 00:37:37 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:45.199 00:37:37 -- common/autotest_common.sh@1572 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:03:45.199 00:37:37 -- common/autotest_common.sh@1578 -- # [[ -z 0000:c9:00.0 ]] 00:03:45.199 00:37:37 -- common/autotest_common.sh@1583 -- # spdk_tgt_pid=2535720 00:03:45.199 00:37:37 -- common/autotest_common.sh@1584 -- # waitforlisten 2535720 00:03:45.199 00:37:37 -- common/autotest_common.sh@1582 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:03:45.199 00:37:37 -- common/autotest_common.sh@817 -- # '[' -z 2535720 ']' 00:03:45.199 00:37:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:45.199 00:37:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:03:45.199 00:37:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:45.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:45.199 00:37:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:03:45.199 00:37:37 -- common/autotest_common.sh@10 -- # set +x 00:03:45.199 [2024-04-27 00:37:37.808108] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:03:45.199 [2024-04-27 00:37:37.808265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2535720 ] 00:03:45.199 EAL: No free 2048 kB hugepages reported on node 1 00:03:45.459 [2024-04-27 00:37:37.942016] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:45.459 [2024-04-27 00:37:38.042349] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:03:46.030 00:37:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:03:46.030 00:37:38 -- common/autotest_common.sh@850 -- # return 0 00:03:46.030 00:37:38 -- common/autotest_common.sh@1586 -- # bdf_id=0 00:03:46.030 00:37:38 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:46.030 00:37:38 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:c9:00.0 00:03:49.328 nvme0n1 00:03:49.328 00:37:41 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:49.328 [2024-04-27 00:37:41.643899] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:49.328 request: 00:03:49.328 { 00:03:49.328 "nvme_ctrlr_name": "nvme0", 00:03:49.328 "password": "test", 00:03:49.328 "method": "bdev_nvme_opal_revert", 00:03:49.328 "req_id": 1 00:03:49.328 } 00:03:49.328 Got JSON-RPC error response 00:03:49.328 response: 00:03:49.328 { 00:03:49.328 "code": -32602, 00:03:49.328 "message": "Invalid parameters" 00:03:49.328 } 00:03:49.328 00:37:41 -- common/autotest_common.sh@1590 -- # true 00:03:49.328 00:37:41 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:49.328 00:37:41 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:49.328 00:37:41 -- common/autotest_common.sh@1588 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme1 -t pcie -a 0000:ca:00.0 00:03:52.625 nvme1n1 00:03:52.625 00:37:44 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme1 -p test 00:03:52.625 [2024-04-27 00:37:44.787251] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme1 not support opal 00:03:52.625 request: 00:03:52.625 { 00:03:52.625 "nvme_ctrlr_name": "nvme1", 00:03:52.625 "password": "test", 00:03:52.625 "method": "bdev_nvme_opal_revert", 00:03:52.625 "req_id": 1 00:03:52.625 } 00:03:52.625 Got JSON-RPC error response 00:03:52.625 response: 00:03:52.625 { 00:03:52.625 "code": -32602, 00:03:52.625 "message": "Invalid parameters" 00:03:52.625 } 00:03:52.625 00:37:44 -- common/autotest_common.sh@1590 -- # true 00:03:52.625 00:37:44 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:52.625 00:37:44 -- common/autotest_common.sh@1587 -- # for bdf in "${bdfs[@]}" 00:03:52.625 00:37:44 -- common/autotest_common.sh@1588 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme2 -t pcie -a 0000:cb:00.0 00:03:55.163 nvme2n1 00:03:55.163 00:37:47 -- common/autotest_common.sh@1590 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme2 -p test 00:03:55.423 [2024-04-27 00:37:47.934451] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme2 not support opal 00:03:55.423 request: 00:03:55.423 { 00:03:55.423 "nvme_ctrlr_name": "nvme2", 00:03:55.423 "password": "test", 00:03:55.423 "method": "bdev_nvme_opal_revert", 00:03:55.423 "req_id": 1 00:03:55.423 } 00:03:55.423 Got JSON-RPC error response 00:03:55.423 response: 00:03:55.423 { 00:03:55.423 "code": -32602, 00:03:55.423 "message": "Invalid parameters" 00:03:55.423 } 00:03:55.423 00:37:47 -- common/autotest_common.sh@1590 -- # true 00:03:55.423 00:37:47 -- common/autotest_common.sh@1591 -- # (( ++bdf_id )) 00:03:55.423 00:37:47 -- common/autotest_common.sh@1594 -- # killprocess 2535720 00:03:55.424 00:37:47 -- common/autotest_common.sh@936 -- # '[' -z 2535720 ']' 00:03:55.424 00:37:47 -- common/autotest_common.sh@940 -- # kill -0 2535720 00:03:55.424 00:37:47 -- common/autotest_common.sh@941 -- # uname 00:03:55.424 00:37:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:03:55.424 00:37:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2535720 00:03:55.424 00:37:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:03:55.424 00:37:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:03:55.424 00:37:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2535720' 00:03:55.424 killing process with pid 2535720 00:03:55.424 00:37:48 -- common/autotest_common.sh@955 -- # kill 2535720 00:03:55.424 00:37:48 -- common/autotest_common.sh@960 -- # wait 2535720 00:03:59.617 00:37:51 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:03:59.617 00:37:51 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:03:59.617 00:37:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:59.617 00:37:51 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:03:59.617 00:37:51 -- spdk/autotest.sh@162 -- # timing_enter lib 00:03:59.617 00:37:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:03:59.617 00:37:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.617 00:37:51 -- spdk/autotest.sh@164 -- # run_test env /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 
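The three bdev_nvme_opal_revert failures above all follow the same JSON-RPC pattern: attach the controller by BDF, then ask the target to revert OPAL, which these drives reject with -32602. A hedged sketch of replaying one round by hand, with the BDF and bdev name copied from the trace and assuming a running spdk_tgt listening on the default /var/tmp/spdk.sock:

    # attach the controller at 0000:c9:00.0 as bdev "nvme0", then attempt the revert
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:c9:00.0
    ./scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test \
        || echo "revert rejected (expected on controllers without OPAL support)"

The "Invalid parameters" response is how vbdev_opal_rpc.c reports an unsupported OPAL operation here, so the || branch is the normal path on this hardware.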
00:03:59.617 00:37:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.617 00:37:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.617 00:37:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.617 ************************************ 00:03:59.617 START TEST env 00:03:59.617 ************************************ 00:03:59.617 00:37:51 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env.sh 00:03:59.617 * Looking for test storage... 00:03:59.617 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env 00:03:59.617 00:37:51 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:03:59.617 00:37:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.617 00:37:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.617 00:37:51 -- common/autotest_common.sh@10 -- # set +x 00:03:59.617 ************************************ 00:03:59.617 START TEST env_memory 00:03:59.617 ************************************ 00:03:59.617 00:37:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/memory/memory_ut 00:03:59.617 00:03:59.617 00:03:59.617 CUnit - A unit testing framework for C - Version 2.1-3 00:03:59.617 http://cunit.sourceforge.net/ 00:03:59.617 00:03:59.617 00:03:59.617 Suite: memory 00:03:59.617 Test: alloc and free memory map ...[2024-04-27 00:37:52.134898] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:59.617 passed 00:03:59.617 Test: mem map translation ...[2024-04-27 00:37:52.181972] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:59.617 [2024-04-27 00:37:52.182006] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:59.617 [2024-04-27 00:37:52.182086] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:59.617 [2024-04-27 00:37:52.182108] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:59.617 passed 00:03:59.617 Test: mem map registration ...[2024-04-27 00:37:52.268741] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:03:59.617 [2024-04-27 00:37:52.268773] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:03:59.617 passed 00:03:59.877 Test: mem map adjacent registrations ...passed 00:03:59.877 00:03:59.877 Run Summary: Type Total Ran Passed Failed Inactive 00:03:59.877 suites 1 1 n/a 0 0 00:03:59.877 tests 4 4 4 0 0 00:03:59.877 asserts 152 152 152 0 n/a 00:03:59.877 00:03:59.877 Elapsed time = 0.293 seconds 00:03:59.877 00:03:59.877 real 0m0.316s 00:03:59.877 user 0m0.300s 00:03:59.878 sys 0m0.014s 00:03:59.878 00:37:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:03:59.878 00:37:52 -- common/autotest_common.sh@10 -- # set +x 00:03:59.878 
************************************ 00:03:59.878 END TEST env_memory 00:03:59.878 ************************************ 00:03:59.878 00:37:52 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:59.878 00:37:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:59.878 00:37:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:59.878 00:37:52 -- common/autotest_common.sh@10 -- # set +x 00:03:59.878 ************************************ 00:03:59.878 START TEST env_vtophys 00:03:59.878 ************************************ 00:03:59.878 00:37:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:59.878 EAL: lib.eal log level changed from notice to debug 00:03:59.878 EAL: Detected lcore 0 as core 0 on socket 0 00:03:59.878 EAL: Detected lcore 1 as core 1 on socket 0 00:03:59.878 EAL: Detected lcore 2 as core 2 on socket 0 00:03:59.878 EAL: Detected lcore 3 as core 3 on socket 0 00:03:59.878 EAL: Detected lcore 4 as core 4 on socket 0 00:03:59.878 EAL: Detected lcore 5 as core 5 on socket 0 00:03:59.878 EAL: Detected lcore 6 as core 6 on socket 0 00:03:59.878 EAL: Detected lcore 7 as core 7 on socket 0 00:03:59.878 EAL: Detected lcore 8 as core 8 on socket 0 00:03:59.878 EAL: Detected lcore 9 as core 9 on socket 0 00:03:59.878 EAL: Detected lcore 10 as core 10 on socket 0 00:03:59.878 EAL: Detected lcore 11 as core 11 on socket 0 00:03:59.878 EAL: Detected lcore 12 as core 12 on socket 0 00:03:59.878 EAL: Detected lcore 13 as core 13 on socket 0 00:03:59.878 EAL: Detected lcore 14 as core 14 on socket 0 00:03:59.878 EAL: Detected lcore 15 as core 15 on socket 0 00:03:59.878 EAL: Detected lcore 16 as core 16 on socket 0 00:03:59.878 EAL: Detected lcore 17 as core 17 on socket 0 00:03:59.878 EAL: Detected lcore 18 as core 18 on socket 0 00:03:59.878 EAL: Detected lcore 19 as core 19 on socket 0 00:03:59.878 EAL: Detected lcore 20 as core 20 on socket 0 00:03:59.878 EAL: Detected lcore 21 as core 21 on socket 0 00:03:59.878 EAL: Detected lcore 22 as core 22 on socket 0 00:03:59.878 EAL: Detected lcore 23 as core 23 on socket 0 00:03:59.878 EAL: Detected lcore 24 as core 24 on socket 0 00:03:59.878 EAL: Detected lcore 25 as core 25 on socket 0 00:03:59.878 EAL: Detected lcore 26 as core 26 on socket 0 00:03:59.878 EAL: Detected lcore 27 as core 27 on socket 0 00:03:59.878 EAL: Detected lcore 28 as core 28 on socket 0 00:03:59.878 EAL: Detected lcore 29 as core 29 on socket 0 00:03:59.878 EAL: Detected lcore 30 as core 30 on socket 0 00:03:59.878 EAL: Detected lcore 31 as core 31 on socket 0 00:03:59.878 EAL: Detected lcore 32 as core 0 on socket 1 00:03:59.878 EAL: Detected lcore 33 as core 1 on socket 1 00:03:59.878 EAL: Detected lcore 34 as core 2 on socket 1 00:03:59.878 EAL: Detected lcore 35 as core 3 on socket 1 00:03:59.878 EAL: Detected lcore 36 as core 4 on socket 1 00:03:59.878 EAL: Detected lcore 37 as core 5 on socket 1 00:03:59.878 EAL: Detected lcore 38 as core 6 on socket 1 00:03:59.878 EAL: Detected lcore 39 as core 7 on socket 1 00:03:59.878 EAL: Detected lcore 40 as core 8 on socket 1 00:03:59.878 EAL: Detected lcore 41 as core 9 on socket 1 00:03:59.878 EAL: Detected lcore 42 as core 10 on socket 1 00:03:59.878 EAL: Detected lcore 43 as core 11 on socket 1 00:03:59.878 EAL: Detected lcore 44 as core 12 on socket 1 00:03:59.878 EAL: Detected lcore 45 as core 13 on socket 1 00:03:59.878 EAL: Detected lcore 46 as core 14 on socket 1 
00:03:59.878 EAL: Detected lcore 47 as core 15 on socket 1 00:03:59.878 EAL: Detected lcore 48 as core 16 on socket 1 00:03:59.878 EAL: Detected lcore 49 as core 17 on socket 1 00:03:59.878 EAL: Detected lcore 50 as core 18 on socket 1 00:03:59.878 EAL: Detected lcore 51 as core 19 on socket 1 00:03:59.878 EAL: Detected lcore 52 as core 20 on socket 1 00:03:59.878 EAL: Detected lcore 53 as core 21 on socket 1 00:03:59.878 EAL: Detected lcore 54 as core 22 on socket 1 00:03:59.878 EAL: Detected lcore 55 as core 23 on socket 1 00:03:59.878 EAL: Detected lcore 56 as core 24 on socket 1 00:03:59.878 EAL: Detected lcore 57 as core 25 on socket 1 00:03:59.878 EAL: Detected lcore 58 as core 26 on socket 1 00:03:59.878 EAL: Detected lcore 59 as core 27 on socket 1 00:03:59.878 EAL: Detected lcore 60 as core 28 on socket 1 00:03:59.878 EAL: Detected lcore 61 as core 29 on socket 1 00:03:59.878 EAL: Detected lcore 62 as core 30 on socket 1 00:03:59.878 EAL: Detected lcore 63 as core 31 on socket 1 00:03:59.878 EAL: Detected lcore 64 as core 0 on socket 0 00:03:59.878 EAL: Detected lcore 65 as core 1 on socket 0 00:03:59.878 EAL: Detected lcore 66 as core 2 on socket 0 00:03:59.878 EAL: Detected lcore 67 as core 3 on socket 0 00:03:59.878 EAL: Detected lcore 68 as core 4 on socket 0 00:03:59.878 EAL: Detected lcore 69 as core 5 on socket 0 00:03:59.878 EAL: Detected lcore 70 as core 6 on socket 0 00:03:59.878 EAL: Detected lcore 71 as core 7 on socket 0 00:03:59.878 EAL: Detected lcore 72 as core 8 on socket 0 00:03:59.878 EAL: Detected lcore 73 as core 9 on socket 0 00:03:59.878 EAL: Detected lcore 74 as core 10 on socket 0 00:03:59.878 EAL: Detected lcore 75 as core 11 on socket 0 00:03:59.878 EAL: Detected lcore 76 as core 12 on socket 0 00:03:59.878 EAL: Detected lcore 77 as core 13 on socket 0 00:03:59.878 EAL: Detected lcore 78 as core 14 on socket 0 00:03:59.878 EAL: Detected lcore 79 as core 15 on socket 0 00:03:59.878 EAL: Detected lcore 80 as core 16 on socket 0 00:03:59.878 EAL: Detected lcore 81 as core 17 on socket 0 00:03:59.878 EAL: Detected lcore 82 as core 18 on socket 0 00:03:59.878 EAL: Detected lcore 83 as core 19 on socket 0 00:03:59.878 EAL: Detected lcore 84 as core 20 on socket 0 00:03:59.878 EAL: Detected lcore 85 as core 21 on socket 0 00:03:59.878 EAL: Detected lcore 86 as core 22 on socket 0 00:03:59.878 EAL: Detected lcore 87 as core 23 on socket 0 00:03:59.878 EAL: Detected lcore 88 as core 24 on socket 0 00:03:59.878 EAL: Detected lcore 89 as core 25 on socket 0 00:03:59.878 EAL: Detected lcore 90 as core 26 on socket 0 00:03:59.878 EAL: Detected lcore 91 as core 27 on socket 0 00:03:59.878 EAL: Detected lcore 92 as core 28 on socket 0 00:03:59.878 EAL: Detected lcore 93 as core 29 on socket 0 00:03:59.878 EAL: Detected lcore 94 as core 30 on socket 0 00:03:59.878 EAL: Detected lcore 95 as core 31 on socket 0 00:03:59.878 EAL: Detected lcore 96 as core 0 on socket 1 00:03:59.878 EAL: Detected lcore 97 as core 1 on socket 1 00:03:59.878 EAL: Detected lcore 98 as core 2 on socket 1 00:03:59.878 EAL: Detected lcore 99 as core 3 on socket 1 00:03:59.878 EAL: Detected lcore 100 as core 4 on socket 1 00:03:59.878 EAL: Detected lcore 101 as core 5 on socket 1 00:03:59.878 EAL: Detected lcore 102 as core 6 on socket 1 00:03:59.878 EAL: Detected lcore 103 as core 7 on socket 1 00:03:59.878 EAL: Detected lcore 104 as core 8 on socket 1 00:03:59.878 EAL: Detected lcore 105 as core 9 on socket 1 00:03:59.878 EAL: Detected lcore 106 as core 10 on socket 1 00:03:59.878 EAL: Detected 
lcore 107 as core 11 on socket 1 00:03:59.878 EAL: Detected lcore 108 as core 12 on socket 1 00:03:59.878 EAL: Detected lcore 109 as core 13 on socket 1 00:03:59.878 EAL: Detected lcore 110 as core 14 on socket 1 00:03:59.878 EAL: Detected lcore 111 as core 15 on socket 1 00:03:59.878 EAL: Detected lcore 112 as core 16 on socket 1 00:03:59.878 EAL: Detected lcore 113 as core 17 on socket 1 00:03:59.878 EAL: Detected lcore 114 as core 18 on socket 1 00:03:59.878 EAL: Detected lcore 115 as core 19 on socket 1 00:03:59.878 EAL: Detected lcore 116 as core 20 on socket 1 00:03:59.878 EAL: Detected lcore 117 as core 21 on socket 1 00:03:59.878 EAL: Detected lcore 118 as core 22 on socket 1 00:03:59.878 EAL: Detected lcore 119 as core 23 on socket 1 00:03:59.878 EAL: Detected lcore 120 as core 24 on socket 1 00:03:59.878 EAL: Detected lcore 121 as core 25 on socket 1 00:03:59.878 EAL: Detected lcore 122 as core 26 on socket 1 00:03:59.878 EAL: Detected lcore 123 as core 27 on socket 1 00:03:59.878 EAL: Detected lcore 124 as core 28 on socket 1 00:03:59.878 EAL: Detected lcore 125 as core 29 on socket 1 00:03:59.878 EAL: Detected lcore 126 as core 30 on socket 1 00:03:59.878 EAL: Detected lcore 127 as core 31 on socket 1 00:03:59.878 EAL: Maximum logical cores by configuration: 128 00:03:59.878 EAL: Detected CPU lcores: 128 00:03:59.878 EAL: Detected NUMA nodes: 2 00:03:59.878 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:03:59.878 EAL: Detected shared linkage of DPDK 00:03:59.878 EAL: No shared files mode enabled, IPC will be disabled 00:04:00.136 EAL: Bus pci wants IOVA as 'DC' 00:04:00.136 EAL: Buses did not request a specific IOVA mode. 00:04:00.136 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:00.136 EAL: Selected IOVA mode 'VA' 00:04:00.136 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.136 EAL: Probing VFIO support... 00:04:00.136 EAL: IOMMU type 1 (Type 1) is supported 00:04:00.136 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:00.136 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:00.136 EAL: VFIO support initialized 00:04:00.136 EAL: Ask a virtual area of 0x2e000 bytes 00:04:00.136 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:00.136 EAL: Setting up physically contiguous memory... 
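One figure in the EAL output worth decoding: every "Ask a virtual area of 0x400000000 bytes" is exactly n_segs * hugepage_sz for one memseg list (8192 segments of 2 MiB), i.e. 16 GiB of virtual address space reserved per list; the small 0x61000 areas just before each one presumably hold the list's metadata (an assumption from typical DPDK layout, not stated in the log). A quick arithmetic check:

    printf '0x%x\n' $(( 8192 * 2097152 ))   # -> 0x400000000 (16 GiB), matching each reservation above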
00:04:00.136 EAL: Setting maximum number of open files to 524288 00:04:00.136 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:00.136 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:00.136 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:00.136 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.136 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:00.136 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.136 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.136 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:00.136 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:00.136 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.136 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:00.136 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.136 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.136 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:00.136 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:00.136 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.136 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:00.136 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.136 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.136 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:00.136 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:00.136 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.136 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:00.136 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.136 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.136 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:00.136 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:00.136 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:00.136 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.136 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:00.136 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.136 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.136 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:00.136 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:00.136 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.136 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:00.136 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.136 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.136 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:00.136 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:00.136 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.136 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:00.136 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.136 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.136 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:00.136 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:00.136 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.136 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:00.136 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.136 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.136 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:04:00.136 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:00.136 EAL: Hugepages will be freed exactly as allocated. 00:04:00.136 EAL: No shared files mode enabled, IPC is disabled 00:04:00.136 EAL: No shared files mode enabled, IPC is disabled 00:04:00.136 EAL: TSC frequency is ~1900000 KHz 00:04:00.136 EAL: Main lcore 0 is ready (tid=7f723c091a40;cpuset=[0]) 00:04:00.136 EAL: Trying to obtain current memory policy. 00:04:00.136 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.136 EAL: Restoring previous memory policy: 0 00:04:00.136 EAL: request: mp_malloc_sync 00:04:00.136 EAL: No shared files mode enabled, IPC is disabled 00:04:00.136 EAL: Heap on socket 0 was expanded by 2MB 00:04:00.136 EAL: No shared files mode enabled, IPC is disabled 00:04:00.136 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:00.136 EAL: Mem event callback 'spdk:(nil)' registered 00:04:00.136 00:04:00.136 00:04:00.136 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.136 http://cunit.sourceforge.net/ 00:04:00.136 00:04:00.136 00:04:00.136 Suite: components_suite 00:04:00.397 Test: vtophys_malloc_test ...passed 00:04:00.397 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:00.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.397 EAL: Restoring previous memory policy: 4 00:04:00.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.397 EAL: request: mp_malloc_sync 00:04:00.397 EAL: No shared files mode enabled, IPC is disabled 00:04:00.397 EAL: Heap on socket 0 was expanded by 4MB 00:04:00.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.397 EAL: request: mp_malloc_sync 00:04:00.397 EAL: No shared files mode enabled, IPC is disabled 00:04:00.397 EAL: Heap on socket 0 was shrunk by 4MB 00:04:00.397 EAL: Trying to obtain current memory policy. 00:04:00.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.397 EAL: Restoring previous memory policy: 4 00:04:00.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.397 EAL: request: mp_malloc_sync 00:04:00.397 EAL: No shared files mode enabled, IPC is disabled 00:04:00.397 EAL: Heap on socket 0 was expanded by 6MB 00:04:00.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.397 EAL: request: mp_malloc_sync 00:04:00.397 EAL: No shared files mode enabled, IPC is disabled 00:04:00.397 EAL: Heap on socket 0 was shrunk by 6MB 00:04:00.397 EAL: Trying to obtain current memory policy. 00:04:00.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.397 EAL: Restoring previous memory policy: 4 00:04:00.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.397 EAL: request: mp_malloc_sync 00:04:00.397 EAL: No shared files mode enabled, IPC is disabled 00:04:00.397 EAL: Heap on socket 0 was expanded by 10MB 00:04:00.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.397 EAL: request: mp_malloc_sync 00:04:00.397 EAL: No shared files mode enabled, IPC is disabled 00:04:00.397 EAL: Heap on socket 0 was shrunk by 10MB 00:04:00.397 EAL: Trying to obtain current memory policy. 
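The repeated "Ask a virtual area of 0x400000000 bytes" requests above follow directly from the memseg parameters in the log: each list holds n_segs:8192 segments of hugepage_sz:2097152 (2 MiB), and EAL creates four lists per NUMA node. A quick illustrative check of the arithmetic; note the 0x400000000 reservations are virtual address space only, with hugepages mapped into them on demand:

    /* memseg_math.c -- illustrative arithmetic only. */
    #include <stdio.h>

    int main(void)
    {
        unsigned long long hugepage_sz = 2097152ULL;        /* 2 MiB, from the log */
        unsigned long long n_segs = 8192ULL;                /* per memseg list */
        unsigned long long per_list = n_segs * hugepage_sz; /* 0x400000000 = 16 GiB */
        unsigned long long per_socket = 4ULL * per_list;    /* 4 lists per socket */

        printf("per list: %#llx, per socket: %llu GiB, both sockets: %llu GiB\n",
               per_list, per_socket >> 30, (2ULL * per_socket) >> 30);
        /* prints: per list: 0x400000000, per socket: 64 GiB, both sockets: 128 GiB */
        return 0;
    }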
00:04:00.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.397 EAL: Restoring previous memory policy: 4 00:04:00.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.397 EAL: request: mp_malloc_sync 00:04:00.398 EAL: No shared files mode enabled, IPC is disabled 00:04:00.398 EAL: Heap on socket 0 was expanded by 18MB 00:04:00.398 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.398 EAL: request: mp_malloc_sync 00:04:00.398 EAL: No shared files mode enabled, IPC is disabled 00:04:00.398 EAL: Heap on socket 0 was shrunk by 18MB 00:04:00.398 EAL: Trying to obtain current memory policy. 00:04:00.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.398 EAL: Restoring previous memory policy: 4 00:04:00.398 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.398 EAL: request: mp_malloc_sync 00:04:00.398 EAL: No shared files mode enabled, IPC is disabled 00:04:00.398 EAL: Heap on socket 0 was expanded by 34MB 00:04:00.398 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.398 EAL: request: mp_malloc_sync 00:04:00.398 EAL: No shared files mode enabled, IPC is disabled 00:04:00.398 EAL: Heap on socket 0 was shrunk by 34MB 00:04:00.398 EAL: Trying to obtain current memory policy. 00:04:00.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.398 EAL: Restoring previous memory policy: 4 00:04:00.398 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.398 EAL: request: mp_malloc_sync 00:04:00.398 EAL: No shared files mode enabled, IPC is disabled 00:04:00.398 EAL: Heap on socket 0 was expanded by 66MB 00:04:00.398 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.398 EAL: request: mp_malloc_sync 00:04:00.398 EAL: No shared files mode enabled, IPC is disabled 00:04:00.398 EAL: Heap on socket 0 was shrunk by 66MB 00:04:00.398 EAL: Trying to obtain current memory policy. 00:04:00.398 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.398 EAL: Restoring previous memory policy: 4 00:04:00.398 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.398 EAL: request: mp_malloc_sync 00:04:00.398 EAL: No shared files mode enabled, IPC is disabled 00:04:00.398 EAL: Heap on socket 0 was expanded by 130MB 00:04:00.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.661 EAL: request: mp_malloc_sync 00:04:00.661 EAL: No shared files mode enabled, IPC is disabled 00:04:00.661 EAL: Heap on socket 0 was shrunk by 130MB 00:04:00.661 EAL: Trying to obtain current memory policy. 00:04:00.661 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.661 EAL: Restoring previous memory policy: 4 00:04:00.661 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.661 EAL: request: mp_malloc_sync 00:04:00.661 EAL: No shared files mode enabled, IPC is disabled 00:04:00.661 EAL: Heap on socket 0 was expanded by 258MB 00:04:00.920 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.920 EAL: request: mp_malloc_sync 00:04:00.920 EAL: No shared files mode enabled, IPC is disabled 00:04:00.920 EAL: Heap on socket 0 was shrunk by 258MB 00:04:00.920 EAL: Trying to obtain current memory policy. 
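The expand/shrink sizes in vtophys_spdk_malloc_test (4, 6, 10, 18, 34, 66, 130, 258 MB above, continuing to 514 and 1026 MB below) are 2^n + 2 MB for n = 1..10, consistent with each round allocating a 2^n MB buffer whose heap metadata spills into one extra 2 MiB hugepage. A sketch of the allocate/translate/free cycle being exercised, assuming SPDK's public env API (spdk/env.h); the program name and loop are illustrative, not the test's own code:

    /* vtophys_sketch.c -- illustrative; build against SPDK's env library. */
    #include <stdio.h>
    #include "spdk/env.h"

    static void alloc_translate_free(size_t size)
    {
        /* DMA-safe allocation from the env heap; triggers the
         * "Heap on socket 0 was expanded by ..." events above. */
        void *buf = spdk_malloc(size, 0x200000 /* 2 MiB align */, NULL,
                                SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
        uint64_t len = size;

        if (buf == NULL)
            return;
        /* The virtual-to-physical translation vtophys_malloc_test verifies. */
        if (spdk_vtophys(buf, &len) == SPDK_VTOPHYS_ERROR)
            fprintf(stderr, "no vtophys translation for %p\n", buf);
        spdk_free(buf); /* heap is trimmed again, as logged */
    }

    int main(void)
    {
        struct spdk_env_opts opts;
        size_t mb;

        spdk_env_opts_init(&opts);
        opts.name = "vtophys_sketch"; /* illustrative */
        if (spdk_env_init(&opts) < 0)
            return 1;
        for (mb = 2; mb <= 1024; mb *= 2)
            alloc_translate_free(mb * 1024 * 1024);
        return 0;
    }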
00:04:00.920 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.181 EAL: Restoring previous memory policy: 4 00:04:01.181 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.181 EAL: request: mp_malloc_sync 00:04:01.181 EAL: No shared files mode enabled, IPC is disabled 00:04:01.181 EAL: Heap on socket 0 was expanded by 514MB 00:04:01.440 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.440 EAL: request: mp_malloc_sync 00:04:01.440 EAL: No shared files mode enabled, IPC is disabled 00:04:01.440 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.698 EAL: Trying to obtain current memory policy. 00:04:01.698 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.698 EAL: Restoring previous memory policy: 4 00:04:01.698 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.698 EAL: request: mp_malloc_sync 00:04:01.698 EAL: No shared files mode enabled, IPC is disabled 00:04:01.698 EAL: Heap on socket 0 was expanded by 1026MB 00:04:02.638 EAL: Calling mem event callback 'spdk:(nil)' 00:04:02.638 EAL: request: mp_malloc_sync 00:04:02.638 EAL: No shared files mode enabled, IPC is disabled 00:04:02.638 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:03.205 passed 00:04:03.205 00:04:03.205 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.205 suites 1 1 n/a 0 0 00:04:03.205 tests 2 2 2 0 0 00:04:03.205 asserts 497 497 497 0 n/a 00:04:03.205 00:04:03.205 Elapsed time = 2.901 seconds 00:04:03.205 EAL: Calling mem event callback 'spdk:(nil)' 00:04:03.205 EAL: request: mp_malloc_sync 00:04:03.205 EAL: No shared files mode enabled, IPC is disabled 00:04:03.205 EAL: Heap on socket 0 was shrunk by 2MB 00:04:03.205 EAL: No shared files mode enabled, IPC is disabled 00:04:03.205 EAL: No shared files mode enabled, IPC is disabled 00:04:03.205 EAL: No shared files mode enabled, IPC is disabled 00:04:03.205 00:04:03.205 real 0m3.113s 00:04:03.205 user 0m2.471s 00:04:03.205 sys 0m0.604s 00:04:03.205 00:37:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.205 00:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:03.205 ************************************ 00:04:03.205 END TEST env_vtophys 00:04:03.205 ************************************ 00:04:03.205 00:37:55 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:03.205 00:37:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:03.205 00:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.205 00:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:03.205 ************************************ 00:04:03.205 START TEST env_pci 00:04:03.205 ************************************ 00:04:03.205 00:37:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/pci/pci_ut 00:04:03.205 00:04:03.205 00:04:03.205 CUnit - A unit testing framework for C - Version 2.1-3 00:04:03.205 http://cunit.sourceforge.net/ 00:04:03.205 00:04:03.205 00:04:03.205 Suite: pci 00:04:03.205 Test: pci_hook ...[2024-04-27 00:37:55.772559] /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2539396 has claimed it 00:04:03.205 EAL: Cannot find device (10000:00:01.0) 00:04:03.205 EAL: Failed to attach device on primary process 00:04:03.206 passed 00:04:03.206 00:04:03.206 Run Summary: Type Total Ran Passed Failed Inactive 00:04:03.206 suites 1 1 n/a 0 0 00:04:03.206 tests 1 1 1 0 0 00:04:03.206 asserts 25 
25 25 0 n/a 00:04:03.206 00:04:03.206 Elapsed time = 0.052 seconds 00:04:03.206 00:04:03.206 real 0m0.105s 00:04:03.206 user 0m0.035s 00:04:03.206 sys 0m0.069s 00:04:03.206 00:37:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:03.206 00:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:03.206 ************************************ 00:04:03.206 END TEST env_pci 00:04:03.206 ************************************ 00:04:03.206 00:37:55 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:03.206 00:37:55 -- env/env.sh@15 -- # uname 00:04:03.206 00:37:55 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:03.206 00:37:55 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:03.206 00:37:55 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:03.206 00:37:55 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:04:03.206 00:37:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:03.206 00:37:55 -- common/autotest_common.sh@10 -- # set +x 00:04:03.465 ************************************ 00:04:03.465 START TEST env_dpdk_post_init 00:04:03.465 ************************************ 00:04:03.465 00:37:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:03.465 EAL: Detected CPU lcores: 128 00:04:03.465 EAL: Detected NUMA nodes: 2 00:04:03.465 EAL: Detected shared linkage of DPDK 00:04:03.465 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:03.465 EAL: Selected IOVA mode 'VA' 00:04:03.465 EAL: No free 2048 kB hugepages reported on node 1 00:04:03.465 EAL: VFIO support initialized 00:04:03.465 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:03.724 EAL: Using IOMMU type 1 (Type 1) 00:04:03.724 EAL: Ignore mapping IO port bar(1) 00:04:03.724 EAL: Ignore mapping IO port bar(3) 00:04:03.724 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6a:01.0 (socket 0) 00:04:03.981 EAL: Ignore mapping IO port bar(1) 00:04:03.981 EAL: Ignore mapping IO port bar(3) 00:04:03.981 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6a:02.0 (socket 0) 00:04:04.240 EAL: Ignore mapping IO port bar(1) 00:04:04.240 EAL: Ignore mapping IO port bar(3) 00:04:04.240 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:6f:01.0 (socket 0) 00:04:04.500 EAL: Ignore mapping IO port bar(1) 00:04:04.500 EAL: Ignore mapping IO port bar(3) 00:04:04.500 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:6f:02.0 (socket 0) 00:04:04.500 EAL: Ignore mapping IO port bar(1) 00:04:04.500 EAL: Ignore mapping IO port bar(3) 00:04:04.758 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:74:01.0 (socket 0) 00:04:04.758 EAL: Ignore mapping IO port bar(1) 00:04:04.758 EAL: Ignore mapping IO port bar(3) 00:04:05.019 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:74:02.0 (socket 0) 00:04:05.019 EAL: Ignore mapping IO port bar(1) 00:04:05.019 EAL: Ignore mapping IO port bar(3) 00:04:05.279 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:79:01.0 (socket 0) 00:04:05.279 EAL: Ignore mapping IO port bar(1) 00:04:05.279 EAL: Ignore mapping IO port bar(3) 00:04:05.279 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:79:02.0 (socket 0) 00:04:06.218 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:c9:00.0 (socket 1) 00:04:06.789 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 
0000:ca:00.0 (socket 1) 00:04:07.726 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:cb:00.0 (socket 1) 00:04:07.726 EAL: Ignore mapping IO port bar(1) 00:04:07.726 EAL: Ignore mapping IO port bar(3) 00:04:07.726 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:e7:01.0 (socket 1) 00:04:07.987 EAL: Ignore mapping IO port bar(1) 00:04:07.987 EAL: Ignore mapping IO port bar(3) 00:04:07.987 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:e7:02.0 (socket 1) 00:04:08.251 EAL: Ignore mapping IO port bar(1) 00:04:08.251 EAL: Ignore mapping IO port bar(3) 00:04:08.251 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:ec:01.0 (socket 1) 00:04:08.587 EAL: Ignore mapping IO port bar(1) 00:04:08.587 EAL: Ignore mapping IO port bar(3) 00:04:08.587 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:ec:02.0 (socket 1) 00:04:08.587 EAL: Ignore mapping IO port bar(1) 00:04:08.587 EAL: Ignore mapping IO port bar(3) 00:04:08.848 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f1:01.0 (socket 1) 00:04:08.848 EAL: Ignore mapping IO port bar(1) 00:04:08.848 EAL: Ignore mapping IO port bar(3) 00:04:08.848 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f1:02.0 (socket 1) 00:04:09.107 EAL: Ignore mapping IO port bar(1) 00:04:09.107 EAL: Ignore mapping IO port bar(3) 00:04:09.107 EAL: Probe PCI driver: spdk_idxd (8086:0b25) device: 0000:f6:01.0 (socket 1) 00:04:09.366 EAL: Ignore mapping IO port bar(1) 00:04:09.366 EAL: Ignore mapping IO port bar(3) 00:04:09.366 EAL: Probe PCI driver: spdk_idxd (8086:0cfe) device: 0000:f6:02.0 (socket 1) 00:04:13.558 EAL: Releasing PCI mapped resource for 0000:ca:00.0 00:04:13.558 EAL: Calling pci_unmap_resource for 0000:ca:00.0 at 0x202001184000 00:04:13.558 EAL: Releasing PCI mapped resource for 0000:cb:00.0 00:04:13.558 EAL: Calling pci_unmap_resource for 0000:cb:00.0 at 0x202001188000 00:04:14.128 EAL: Releasing PCI mapped resource for 0000:c9:00.0 00:04:14.128 EAL: Calling pci_unmap_resource for 0000:c9:00.0 at 0x202001180000 00:04:14.388 Starting DPDK initialization... 00:04:14.388 Starting SPDK post initialization... 00:04:14.388 SPDK NVMe probe 00:04:14.388 Attaching to 0000:c9:00.0 00:04:14.388 Attaching to 0000:ca:00.0 00:04:14.388 Attaching to 0000:cb:00.0 00:04:14.388 Attached to 0000:c9:00.0 00:04:14.388 Attached to 0000:cb:00.0 00:04:14.388 Attached to 0000:ca:00.0 00:04:14.388 Cleaning up... 
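env_dpdk_post_init repeats the environment bring-up with the test's '-c 0x1 --base-virtaddr=0x200000000000' arguments and then walks the PCI bus: spdk_idxd binds the DSA/IAA functions (8086:0b25 / 8086:0cfe) and spdk_nvme attaches the three NVMe controllers on socket 1 (c9/ca/cb:00.0). A hedged sketch of the equivalent setup through SPDK's public env API; the option values mirror the command line above, and the program name is made up:

    /* post_init_sketch.c -- illustrative, not the test binary itself. */
    #include "spdk/env.h"

    int main(void)
    {
        struct spdk_env_opts opts;

        spdk_env_opts_init(&opts);
        opts.name = "post_init_sketch";         /* illustrative */
        opts.core_mask = "0x1";                 /* from -c 0x1 */
        opts.base_virtaddr = 0x200000000000ULL; /* from --base-virtaddr */
        if (spdk_env_init(&opts) < 0)
            return 1;

        /* The "Probe PCI driver" / "Attaching to ..." lines above are then
         * produced by the drivers' own probes, e.g. spdk_nvme_probe(). */
        spdk_env_fini();
        return 0;
    }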
00:04:16.295 00:04:16.295 real 0m12.635s 00:04:16.295 user 0m5.007s 00:04:16.295 sys 0m0.185s 00:04:16.295 00:38:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.295 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:16.295 ************************************ 00:04:16.295 END TEST env_dpdk_post_init 00:04:16.295 ************************************ 00:04:16.295 00:38:08 -- env/env.sh@26 -- # uname 00:04:16.295 00:38:08 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:16.295 00:38:08 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:16.295 00:38:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.295 00:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.295 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:16.295 ************************************ 00:04:16.295 START TEST env_mem_callbacks 00:04:16.295 ************************************ 00:04:16.295 00:38:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:16.295 EAL: Detected CPU lcores: 128 00:04:16.295 EAL: Detected NUMA nodes: 2 00:04:16.295 EAL: Detected shared linkage of DPDK 00:04:16.295 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:16.295 EAL: Selected IOVA mode 'VA' 00:04:16.295 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.295 EAL: VFIO support initialized 00:04:16.295 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:16.295 00:04:16.295 00:04:16.295 CUnit - A unit testing framework for C - Version 2.1-3 00:04:16.295 http://cunit.sourceforge.net/ 00:04:16.295 00:04:16.295 00:04:16.295 Suite: memory 00:04:16.295 Test: test ... 
00:04:16.295 register 0x200000200000 2097152 00:04:16.295 malloc 3145728 00:04:16.295 register 0x200000400000 4194304 00:04:16.295 buf 0x2000004fffc0 len 3145728 PASSED 00:04:16.295 malloc 64 00:04:16.295 buf 0x2000004ffec0 len 64 PASSED 00:04:16.295 malloc 4194304 00:04:16.295 register 0x200000800000 6291456 00:04:16.295 buf 0x2000009fffc0 len 4194304 PASSED 00:04:16.295 free 0x2000004fffc0 3145728 00:04:16.295 free 0x2000004ffec0 64 00:04:16.295 unregister 0x200000400000 4194304 PASSED 00:04:16.295 free 0x2000009fffc0 4194304 00:04:16.295 unregister 0x200000800000 6291456 PASSED 00:04:16.295 malloc 8388608 00:04:16.295 register 0x200000400000 10485760 00:04:16.295 buf 0x2000005fffc0 len 8388608 PASSED 00:04:16.295 free 0x2000005fffc0 8388608 00:04:16.295 unregister 0x200000400000 10485760 PASSED 00:04:16.295 passed 00:04:16.295 00:04:16.295 Run Summary: Type Total Ran Passed Failed Inactive 00:04:16.295 suites 1 1 n/a 0 0 00:04:16.295 tests 1 1 1 0 0 00:04:16.295 asserts 15 15 15 0 n/a 00:04:16.295 00:04:16.295 Elapsed time = 0.022 seconds 00:04:16.295 00:04:16.295 real 0m0.157s 00:04:16.295 user 0m0.056s 00:04:16.295 sys 0m0.100s 00:04:16.295 00:38:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.295 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:16.295 ************************************ 00:04:16.295 END TEST env_mem_callbacks 00:04:16.295 ************************************ 00:04:16.295 00:04:16.295 real 0m17.026s 00:04:16.295 user 0m8.105s 00:04:16.295 sys 0m1.413s 00:04:16.295 00:38:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:16.295 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:16.295 ************************************ 00:04:16.295 END TEST env 00:04:16.295 ************************************ 00:04:16.295 00:38:08 -- spdk/autotest.sh@165 -- # run_test rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:16.295 00:38:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:16.295 00:38:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:16.295 00:38:08 -- common/autotest_common.sh@10 -- # set +x 00:04:16.554 ************************************ 00:04:16.554 START TEST rpc 00:04:16.554 ************************************ 00:04:16.554 00:38:09 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/rpc.sh 00:04:16.554 * Looking for test storage... 00:04:16.554 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:16.554 00:38:09 -- rpc/rpc.sh@65 -- # spdk_pid=2542064 00:04:16.554 00:38:09 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:16.554 00:38:09 -- rpc/rpc.sh@67 -- # waitforlisten 2542064 00:04:16.554 00:38:09 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:16.554 00:38:09 -- common/autotest_common.sh@817 -- # '[' -z 2542064 ']' 00:04:16.555 00:38:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:16.555 00:38:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:16.555 00:38:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:16.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
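Looking back at the env_mem_callbacks trace just above: each register/unregister line is spdk_mem_register()/spdk_mem_unregister() fanning out to every memory map's notify hook, which is how DMA-capable drivers learn about new buffers. A minimal sketch of installing such a hook via spdk/env.h; the callback body and function names are illustrative:

    /* mem_hook.c -- illustrative notify hook, installed after spdk_env_init(). */
    #include <stdio.h>
    #include "spdk/env.h"

    static int
    notify(void *cb_ctx, struct spdk_mem_map *map,
           enum spdk_mem_map_notify_action action, void *vaddr, size_t size)
    {
        /* Mirrors the "register 0x... <len>" / "unregister ..." lines above. */
        printf("%s %p %zu\n",
               action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
               vaddr, size);
        return 0;
    }

    static const struct spdk_mem_map_ops ops = { .notify_cb = notify };

    void install_hook(void)
    {
        /* 0 is the default translation for unregistered regions. */
        struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);

        (void)map; /* keep; release later with spdk_mem_map_free(&map) */
    }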
00:04:16.555 00:38:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:16.555 00:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:16.555 [2024-04-27 00:38:09.201716] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:16.555 [2024-04-27 00:38:09.201828] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2542064 ] 00:04:16.813 EAL: No free 2048 kB hugepages reported on node 1 00:04:16.813 [2024-04-27 00:38:09.324410] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:16.813 [2024-04-27 00:38:09.417989] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:16.813 [2024-04-27 00:38:09.418025] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2542064' to capture a snapshot of events at runtime. 00:04:16.813 [2024-04-27 00:38:09.418038] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:16.813 [2024-04-27 00:38:09.418046] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:16.813 [2024-04-27 00:38:09.418055] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2542064 for offline analysis/debug. 00:04:16.813 [2024-04-27 00:38:09.418091] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:17.381 00:38:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:17.381 00:38:09 -- common/autotest_common.sh@850 -- # return 0 00:04:17.381 00:38:09 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:17.381 00:38:09 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:17.381 00:38:09 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:17.381 00:38:09 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:17.381 00:38:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.382 00:38:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.382 00:38:09 -- common/autotest_common.sh@10 -- # set +x 00:04:17.382 ************************************ 00:04:17.382 START TEST rpc_integrity 00:04:17.382 ************************************ 00:04:17.382 00:38:10 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:17.382 00:38:10 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.382 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.382 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.382 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.382 00:38:10 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.382 00:38:10 -- rpc/rpc.sh@13 -- # jq length 00:04:17.642 00:38:10 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.642 00:38:10 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.642 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.642 00:38:10 -- common/autotest_common.sh@10 
-- # set +x 00:04:17.642 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.642 00:38:10 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:17.642 00:38:10 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.642 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.642 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.642 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.642 00:38:10 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.642 { 00:04:17.642 "name": "Malloc0", 00:04:17.642 "aliases": [ 00:04:17.642 "67f27e85-0e88-4b13-b1ea-eecd2a923138" 00:04:17.642 ], 00:04:17.642 "product_name": "Malloc disk", 00:04:17.642 "block_size": 512, 00:04:17.642 "num_blocks": 16384, 00:04:17.643 "uuid": "67f27e85-0e88-4b13-b1ea-eecd2a923138", 00:04:17.643 "assigned_rate_limits": { 00:04:17.643 "rw_ios_per_sec": 0, 00:04:17.643 "rw_mbytes_per_sec": 0, 00:04:17.643 "r_mbytes_per_sec": 0, 00:04:17.643 "w_mbytes_per_sec": 0 00:04:17.643 }, 00:04:17.643 "claimed": false, 00:04:17.643 "zoned": false, 00:04:17.643 "supported_io_types": { 00:04:17.643 "read": true, 00:04:17.643 "write": true, 00:04:17.643 "unmap": true, 00:04:17.643 "write_zeroes": true, 00:04:17.643 "flush": true, 00:04:17.643 "reset": true, 00:04:17.643 "compare": false, 00:04:17.643 "compare_and_write": false, 00:04:17.643 "abort": true, 00:04:17.643 "nvme_admin": false, 00:04:17.643 "nvme_io": false 00:04:17.643 }, 00:04:17.643 "memory_domains": [ 00:04:17.643 { 00:04:17.643 "dma_device_id": "system", 00:04:17.643 "dma_device_type": 1 00:04:17.643 }, 00:04:17.643 { 00:04:17.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.643 "dma_device_type": 2 00:04:17.643 } 00:04:17.643 ], 00:04:17.643 "driver_specific": {} 00:04:17.643 } 00:04:17.643 ]' 00:04:17.643 00:38:10 -- rpc/rpc.sh@17 -- # jq length 00:04:17.643 00:38:10 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.643 00:38:10 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:17.643 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.643 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 [2024-04-27 00:38:10.157089] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:17.643 [2024-04-27 00:38:10.157135] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.643 [2024-04-27 00:38:10.157167] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000020180 00:04:17.643 [2024-04-27 00:38:10.157177] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.643 [2024-04-27 00:38:10.158911] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.643 [2024-04-27 00:38:10.158935] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.643 Passthru0 00:04:17.643 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.643 00:38:10 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.643 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.643 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.643 00:38:10 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.643 { 00:04:17.643 "name": "Malloc0", 00:04:17.643 "aliases": [ 00:04:17.643 "67f27e85-0e88-4b13-b1ea-eecd2a923138" 00:04:17.643 ], 00:04:17.643 "product_name": "Malloc disk", 00:04:17.643 "block_size": 512, 00:04:17.643 "num_blocks": 16384, 00:04:17.643 
"uuid": "67f27e85-0e88-4b13-b1ea-eecd2a923138", 00:04:17.643 "assigned_rate_limits": { 00:04:17.643 "rw_ios_per_sec": 0, 00:04:17.643 "rw_mbytes_per_sec": 0, 00:04:17.643 "r_mbytes_per_sec": 0, 00:04:17.643 "w_mbytes_per_sec": 0 00:04:17.643 }, 00:04:17.643 "claimed": true, 00:04:17.643 "claim_type": "exclusive_write", 00:04:17.643 "zoned": false, 00:04:17.643 "supported_io_types": { 00:04:17.643 "read": true, 00:04:17.643 "write": true, 00:04:17.643 "unmap": true, 00:04:17.643 "write_zeroes": true, 00:04:17.643 "flush": true, 00:04:17.643 "reset": true, 00:04:17.643 "compare": false, 00:04:17.643 "compare_and_write": false, 00:04:17.643 "abort": true, 00:04:17.643 "nvme_admin": false, 00:04:17.643 "nvme_io": false 00:04:17.643 }, 00:04:17.643 "memory_domains": [ 00:04:17.643 { 00:04:17.643 "dma_device_id": "system", 00:04:17.643 "dma_device_type": 1 00:04:17.643 }, 00:04:17.643 { 00:04:17.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.643 "dma_device_type": 2 00:04:17.643 } 00:04:17.643 ], 00:04:17.643 "driver_specific": {} 00:04:17.643 }, 00:04:17.643 { 00:04:17.643 "name": "Passthru0", 00:04:17.643 "aliases": [ 00:04:17.643 "16da2b19-6a3d-5e58-96fa-f8a3549c1de3" 00:04:17.643 ], 00:04:17.643 "product_name": "passthru", 00:04:17.643 "block_size": 512, 00:04:17.643 "num_blocks": 16384, 00:04:17.643 "uuid": "16da2b19-6a3d-5e58-96fa-f8a3549c1de3", 00:04:17.643 "assigned_rate_limits": { 00:04:17.643 "rw_ios_per_sec": 0, 00:04:17.643 "rw_mbytes_per_sec": 0, 00:04:17.643 "r_mbytes_per_sec": 0, 00:04:17.643 "w_mbytes_per_sec": 0 00:04:17.643 }, 00:04:17.643 "claimed": false, 00:04:17.643 "zoned": false, 00:04:17.643 "supported_io_types": { 00:04:17.643 "read": true, 00:04:17.643 "write": true, 00:04:17.643 "unmap": true, 00:04:17.643 "write_zeroes": true, 00:04:17.643 "flush": true, 00:04:17.643 "reset": true, 00:04:17.643 "compare": false, 00:04:17.643 "compare_and_write": false, 00:04:17.643 "abort": true, 00:04:17.643 "nvme_admin": false, 00:04:17.643 "nvme_io": false 00:04:17.643 }, 00:04:17.643 "memory_domains": [ 00:04:17.643 { 00:04:17.643 "dma_device_id": "system", 00:04:17.643 "dma_device_type": 1 00:04:17.643 }, 00:04:17.643 { 00:04:17.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.643 "dma_device_type": 2 00:04:17.643 } 00:04:17.643 ], 00:04:17.643 "driver_specific": { 00:04:17.643 "passthru": { 00:04:17.643 "name": "Passthru0", 00:04:17.643 "base_bdev_name": "Malloc0" 00:04:17.643 } 00:04:17.643 } 00:04:17.643 } 00:04:17.643 ]' 00:04:17.643 00:38:10 -- rpc/rpc.sh@21 -- # jq length 00:04:17.643 00:38:10 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.643 00:38:10 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.643 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.643 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.643 00:38:10 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:17.643 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.643 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.643 00:38:10 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.643 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.643 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.643 00:38:10 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.643 00:38:10 
-- rpc/rpc.sh@26 -- # jq length 00:04:17.643 00:38:10 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.643 00:04:17.643 real 0m0.244s 00:04:17.643 user 0m0.137s 00:04:17.643 sys 0m0.032s 00:04:17.643 00:38:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:17.643 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.643 ************************************ 00:04:17.643 END TEST rpc_integrity 00:04:17.643 ************************************ 00:04:17.643 00:38:10 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:17.643 00:38:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.643 00:38:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.643 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.903 ************************************ 00:04:17.903 START TEST rpc_plugins 00:04:17.903 ************************************ 00:04:17.903 00:38:10 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:04:17.903 00:38:10 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:17.903 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.903 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.903 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.903 00:38:10 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:17.903 00:38:10 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:17.903 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.903 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.903 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.903 00:38:10 -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:17.903 { 00:04:17.903 "name": "Malloc1", 00:04:17.903 "aliases": [ 00:04:17.903 "7cb33b02-d096-49b9-9a05-0965b7d86e16" 00:04:17.903 ], 00:04:17.903 "product_name": "Malloc disk", 00:04:17.903 "block_size": 4096, 00:04:17.903 "num_blocks": 256, 00:04:17.903 "uuid": "7cb33b02-d096-49b9-9a05-0965b7d86e16", 00:04:17.903 "assigned_rate_limits": { 00:04:17.903 "rw_ios_per_sec": 0, 00:04:17.903 "rw_mbytes_per_sec": 0, 00:04:17.903 "r_mbytes_per_sec": 0, 00:04:17.903 "w_mbytes_per_sec": 0 00:04:17.903 }, 00:04:17.903 "claimed": false, 00:04:17.903 "zoned": false, 00:04:17.903 "supported_io_types": { 00:04:17.903 "read": true, 00:04:17.903 "write": true, 00:04:17.903 "unmap": true, 00:04:17.903 "write_zeroes": true, 00:04:17.903 "flush": true, 00:04:17.903 "reset": true, 00:04:17.903 "compare": false, 00:04:17.903 "compare_and_write": false, 00:04:17.903 "abort": true, 00:04:17.903 "nvme_admin": false, 00:04:17.903 "nvme_io": false 00:04:17.903 }, 00:04:17.903 "memory_domains": [ 00:04:17.903 { 00:04:17.903 "dma_device_id": "system", 00:04:17.903 "dma_device_type": 1 00:04:17.903 }, 00:04:17.903 { 00:04:17.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.903 "dma_device_type": 2 00:04:17.903 } 00:04:17.903 ], 00:04:17.903 "driver_specific": {} 00:04:17.903 } 00:04:17.903 ]' 00:04:17.903 00:38:10 -- rpc/rpc.sh@32 -- # jq length 00:04:17.903 00:38:10 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:17.903 00:38:10 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:17.903 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.903 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.903 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.903 00:38:10 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:17.903 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:17.903 00:38:10 -- 
common/autotest_common.sh@10 -- # set +x 00:04:17.903 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:17.903 00:38:10 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:17.903 00:38:10 -- rpc/rpc.sh@36 -- # jq length 00:04:17.903 00:38:10 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:17.903 00:04:17.903 real 0m0.117s 00:04:17.903 user 0m0.069s 00:04:17.903 sys 0m0.015s 00:04:17.904 00:38:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:17.904 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:17.904 ************************************ 00:04:17.904 END TEST rpc_plugins 00:04:17.904 ************************************ 00:04:17.904 00:38:10 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:17.904 00:38:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:17.904 00:38:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:17.904 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:18.161 ************************************ 00:04:18.161 START TEST rpc_trace_cmd_test 00:04:18.161 ************************************ 00:04:18.161 00:38:10 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:04:18.161 00:38:10 -- rpc/rpc.sh@40 -- # local info 00:04:18.161 00:38:10 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:18.161 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.161 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:18.161 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.161 00:38:10 -- rpc/rpc.sh@42 -- # info='{ 00:04:18.161 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2542064", 00:04:18.161 "tpoint_group_mask": "0x8", 00:04:18.161 "iscsi_conn": { 00:04:18.162 "mask": "0x2", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "scsi": { 00:04:18.162 "mask": "0x4", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "bdev": { 00:04:18.162 "mask": "0x8", 00:04:18.162 "tpoint_mask": "0xffffffffffffffff" 00:04:18.162 }, 00:04:18.162 "nvmf_rdma": { 00:04:18.162 "mask": "0x10", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "nvmf_tcp": { 00:04:18.162 "mask": "0x20", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "ftl": { 00:04:18.162 "mask": "0x40", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "blobfs": { 00:04:18.162 "mask": "0x80", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "dsa": { 00:04:18.162 "mask": "0x200", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "thread": { 00:04:18.162 "mask": "0x400", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "nvme_pcie": { 00:04:18.162 "mask": "0x800", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "iaa": { 00:04:18.162 "mask": "0x1000", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "nvme_tcp": { 00:04:18.162 "mask": "0x2000", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "bdev_nvme": { 00:04:18.162 "mask": "0x4000", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 }, 00:04:18.162 "sock": { 00:04:18.162 "mask": "0x8000", 00:04:18.162 "tpoint_mask": "0x0" 00:04:18.162 } 00:04:18.162 }' 00:04:18.162 00:38:10 -- rpc/rpc.sh@43 -- # jq length 00:04:18.162 00:38:10 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:18.162 00:38:10 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:18.162 00:38:10 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:18.162 00:38:10 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:18.162 00:38:10 -- rpc/rpc.sh@45 -- # '[' 
true = true ']' 00:04:18.162 00:38:10 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:18.162 00:38:10 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:18.162 00:38:10 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:18.420 00:38:10 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:18.420 00:04:18.420 real 0m0.189s 00:04:18.420 user 0m0.155s 00:04:18.420 sys 0m0.024s 00:04:18.420 00:38:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.420 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:18.420 ************************************ 00:04:18.420 END TEST rpc_trace_cmd_test 00:04:18.420 ************************************ 00:04:18.421 00:38:10 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:18.421 00:38:10 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:18.421 00:38:10 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:18.421 00:38:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:18.421 00:38:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:18.421 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:18.421 ************************************ 00:04:18.421 START TEST rpc_daemon_integrity 00:04:18.421 ************************************ 00:04:18.421 00:38:10 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:04:18.421 00:38:10 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:18.421 00:38:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.421 00:38:10 -- common/autotest_common.sh@10 -- # set +x 00:04:18.421 00:38:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.421 00:38:10 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:18.421 00:38:10 -- rpc/rpc.sh@13 -- # jq length 00:04:18.421 00:38:11 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:18.421 00:38:11 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:18.421 00:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.421 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.421 00:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.421 00:38:11 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:18.421 00:38:11 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:18.421 00:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.421 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.421 00:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.421 00:38:11 -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:18.421 { 00:04:18.421 "name": "Malloc2", 00:04:18.421 "aliases": [ 00:04:18.421 "ca25321e-6c93-447d-87be-cf9cb9950555" 00:04:18.421 ], 00:04:18.421 "product_name": "Malloc disk", 00:04:18.421 "block_size": 512, 00:04:18.421 "num_blocks": 16384, 00:04:18.421 "uuid": "ca25321e-6c93-447d-87be-cf9cb9950555", 00:04:18.421 "assigned_rate_limits": { 00:04:18.421 "rw_ios_per_sec": 0, 00:04:18.421 "rw_mbytes_per_sec": 0, 00:04:18.421 "r_mbytes_per_sec": 0, 00:04:18.421 "w_mbytes_per_sec": 0 00:04:18.421 }, 00:04:18.421 "claimed": false, 00:04:18.421 "zoned": false, 00:04:18.421 "supported_io_types": { 00:04:18.421 "read": true, 00:04:18.421 "write": true, 00:04:18.421 "unmap": true, 00:04:18.421 "write_zeroes": true, 00:04:18.421 "flush": true, 00:04:18.421 "reset": true, 00:04:18.421 "compare": false, 00:04:18.421 "compare_and_write": false, 00:04:18.421 "abort": true, 00:04:18.421 "nvme_admin": false, 00:04:18.421 "nvme_io": false 00:04:18.421 }, 00:04:18.421 "memory_domains": [ 00:04:18.421 { 00:04:18.421 "dma_device_id": "system", 00:04:18.421 "dma_device_type": 1 00:04:18.421 }, 00:04:18.421 { 
00:04:18.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.421 "dma_device_type": 2 00:04:18.421 } 00:04:18.421 ], 00:04:18.421 "driver_specific": {} 00:04:18.421 } 00:04:18.421 ]' 00:04:18.421 00:38:11 -- rpc/rpc.sh@17 -- # jq length 00:04:18.421 00:38:11 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:18.421 00:38:11 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:18.421 00:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.421 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.421 [2024-04-27 00:38:11.086916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:18.421 [2024-04-27 00:38:11.086958] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:18.421 [2024-04-27 00:38:11.086984] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000021380 00:04:18.421 [2024-04-27 00:38:11.086993] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:18.421 [2024-04-27 00:38:11.088729] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:18.421 [2024-04-27 00:38:11.088755] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:18.421 Passthru0 00:04:18.421 00:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.421 00:38:11 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:18.421 00:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.421 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.421 00:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.421 00:38:11 -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:18.421 { 00:04:18.421 "name": "Malloc2", 00:04:18.421 "aliases": [ 00:04:18.421 "ca25321e-6c93-447d-87be-cf9cb9950555" 00:04:18.421 ], 00:04:18.421 "product_name": "Malloc disk", 00:04:18.421 "block_size": 512, 00:04:18.421 "num_blocks": 16384, 00:04:18.421 "uuid": "ca25321e-6c93-447d-87be-cf9cb9950555", 00:04:18.421 "assigned_rate_limits": { 00:04:18.421 "rw_ios_per_sec": 0, 00:04:18.421 "rw_mbytes_per_sec": 0, 00:04:18.421 "r_mbytes_per_sec": 0, 00:04:18.421 "w_mbytes_per_sec": 0 00:04:18.421 }, 00:04:18.421 "claimed": true, 00:04:18.421 "claim_type": "exclusive_write", 00:04:18.421 "zoned": false, 00:04:18.421 "supported_io_types": { 00:04:18.421 "read": true, 00:04:18.421 "write": true, 00:04:18.421 "unmap": true, 00:04:18.421 "write_zeroes": true, 00:04:18.421 "flush": true, 00:04:18.421 "reset": true, 00:04:18.421 "compare": false, 00:04:18.421 "compare_and_write": false, 00:04:18.421 "abort": true, 00:04:18.421 "nvme_admin": false, 00:04:18.421 "nvme_io": false 00:04:18.421 }, 00:04:18.421 "memory_domains": [ 00:04:18.421 { 00:04:18.421 "dma_device_id": "system", 00:04:18.421 "dma_device_type": 1 00:04:18.421 }, 00:04:18.421 { 00:04:18.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.421 "dma_device_type": 2 00:04:18.421 } 00:04:18.421 ], 00:04:18.421 "driver_specific": {} 00:04:18.421 }, 00:04:18.421 { 00:04:18.421 "name": "Passthru0", 00:04:18.421 "aliases": [ 00:04:18.421 "3b5af91c-d349-5ca4-8f0b-ca61492a8a45" 00:04:18.421 ], 00:04:18.421 "product_name": "passthru", 00:04:18.421 "block_size": 512, 00:04:18.421 "num_blocks": 16384, 00:04:18.421 "uuid": "3b5af91c-d349-5ca4-8f0b-ca61492a8a45", 00:04:18.421 "assigned_rate_limits": { 00:04:18.421 "rw_ios_per_sec": 0, 00:04:18.421 "rw_mbytes_per_sec": 0, 00:04:18.421 "r_mbytes_per_sec": 0, 00:04:18.421 "w_mbytes_per_sec": 0 00:04:18.421 }, 00:04:18.421 
"claimed": false, 00:04:18.421 "zoned": false, 00:04:18.421 "supported_io_types": { 00:04:18.421 "read": true, 00:04:18.421 "write": true, 00:04:18.421 "unmap": true, 00:04:18.421 "write_zeroes": true, 00:04:18.421 "flush": true, 00:04:18.421 "reset": true, 00:04:18.421 "compare": false, 00:04:18.421 "compare_and_write": false, 00:04:18.421 "abort": true, 00:04:18.421 "nvme_admin": false, 00:04:18.421 "nvme_io": false 00:04:18.421 }, 00:04:18.421 "memory_domains": [ 00:04:18.421 { 00:04:18.421 "dma_device_id": "system", 00:04:18.421 "dma_device_type": 1 00:04:18.421 }, 00:04:18.421 { 00:04:18.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:18.421 "dma_device_type": 2 00:04:18.421 } 00:04:18.421 ], 00:04:18.421 "driver_specific": { 00:04:18.421 "passthru": { 00:04:18.421 "name": "Passthru0", 00:04:18.421 "base_bdev_name": "Malloc2" 00:04:18.421 } 00:04:18.421 } 00:04:18.421 } 00:04:18.421 ]' 00:04:18.421 00:38:11 -- rpc/rpc.sh@21 -- # jq length 00:04:18.682 00:38:11 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:18.682 00:38:11 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:18.682 00:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.682 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.682 00:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.682 00:38:11 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:18.682 00:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.682 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.682 00:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.682 00:38:11 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:18.682 00:38:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:18.682 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.682 00:38:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:18.682 00:38:11 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:18.682 00:38:11 -- rpc/rpc.sh@26 -- # jq length 00:04:18.682 00:38:11 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:18.682 00:04:18.682 real 0m0.239s 00:04:18.682 user 0m0.136s 00:04:18.682 sys 0m0.031s 00:04:18.682 00:38:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:18.682 00:38:11 -- common/autotest_common.sh@10 -- # set +x 00:04:18.682 ************************************ 00:04:18.682 END TEST rpc_daemon_integrity 00:04:18.682 ************************************ 00:04:18.682 00:38:11 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:18.682 00:38:11 -- rpc/rpc.sh@84 -- # killprocess 2542064 00:04:18.682 00:38:11 -- common/autotest_common.sh@936 -- # '[' -z 2542064 ']' 00:04:18.682 00:38:11 -- common/autotest_common.sh@940 -- # kill -0 2542064 00:04:18.682 00:38:11 -- common/autotest_common.sh@941 -- # uname 00:04:18.682 00:38:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:18.682 00:38:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2542064 00:04:18.682 00:38:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:18.682 00:38:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:18.682 00:38:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2542064' 00:04:18.682 killing process with pid 2542064 00:04:18.682 00:38:11 -- common/autotest_common.sh@955 -- # kill 2542064 00:04:18.682 00:38:11 -- common/autotest_common.sh@960 -- # wait 2542064 00:04:19.620 00:04:19.620 real 0m3.096s 00:04:19.620 user 0m3.694s 00:04:19.620 sys 0m0.828s 00:04:19.620 00:38:12 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:04:19.620 00:38:12 -- common/autotest_common.sh@10 -- # set +x 00:04:19.620 ************************************ 00:04:19.620 END TEST rpc 00:04:19.620 ************************************ 00:04:19.620 00:38:12 -- spdk/autotest.sh@166 -- # run_test skip_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:19.620 00:38:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.620 00:38:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.620 00:38:12 -- common/autotest_common.sh@10 -- # set +x 00:04:19.620 ************************************ 00:04:19.620 START TEST skip_rpc 00:04:19.620 ************************************ 00:04:19.620 00:38:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:19.881 * Looking for test storage... 00:04:19.881 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc 00:04:19.881 00:38:12 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:19.881 00:38:12 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:19.881 00:38:12 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:19.881 00:38:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:19.881 00:38:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:19.881 00:38:12 -- common/autotest_common.sh@10 -- # set +x 00:04:19.881 ************************************ 00:04:19.881 START TEST skip_rpc 00:04:19.881 ************************************ 00:04:19.881 00:38:12 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:04:19.881 00:38:12 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2543002 00:04:19.881 00:38:12 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:19.881 00:38:12 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:19.881 00:38:12 -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:19.881 [2024-04-27 00:38:12.517387] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:19.881 [2024-04-27 00:38:12.517501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2543002 ] 00:04:20.141 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.141 [2024-04-27 00:38:12.639194] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:20.141 [2024-04-27 00:38:12.730150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.466 00:38:17 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:25.466 00:38:17 -- common/autotest_common.sh@638 -- # local es=0 00:04:25.466 00:38:17 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:25.466 00:38:17 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:04:25.466 00:38:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.466 00:38:17 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:04:25.466 00:38:17 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:25.466 00:38:17 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:04:25.466 00:38:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:25.466 00:38:17 -- common/autotest_common.sh@10 -- # set +x 00:04:25.466 00:38:17 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:25.466 00:38:17 -- common/autotest_common.sh@641 -- # es=1 00:04:25.466 00:38:17 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:25.466 00:38:17 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:25.466 00:38:17 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:25.466 00:38:17 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:25.466 00:38:17 -- rpc/skip_rpc.sh@23 -- # killprocess 2543002 00:04:25.466 00:38:17 -- common/autotest_common.sh@936 -- # '[' -z 2543002 ']' 00:04:25.466 00:38:17 -- common/autotest_common.sh@940 -- # kill -0 2543002 00:04:25.466 00:38:17 -- common/autotest_common.sh@941 -- # uname 00:04:25.466 00:38:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:25.466 00:38:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2543002 00:04:25.466 00:38:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:25.466 00:38:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:25.466 00:38:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2543002' 00:04:25.466 killing process with pid 2543002 00:04:25.466 00:38:17 -- common/autotest_common.sh@955 -- # kill 2543002 00:04:25.466 00:38:17 -- common/autotest_common.sh@960 -- # wait 2543002 00:04:25.726 00:04:25.726 real 0m5.900s 00:04:25.726 user 0m5.574s 00:04:25.726 sys 0m0.338s 00:04:25.726 00:38:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:25.726 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.726 ************************************ 00:04:25.726 END TEST skip_rpc 00:04:25.726 ************************************ 00:04:25.726 00:38:18 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:25.726 00:38:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:25.726 00:38:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:25.726 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.984 ************************************ 00:04:25.984 START TEST skip_rpc_with_json 00:04:25.984 ************************************ 
00:04:25.984 00:38:18 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:04:25.984 00:38:18 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:25.984 00:38:18 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2544242 00:04:25.984 00:38:18 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:25.984 00:38:18 -- rpc/skip_rpc.sh@31 -- # waitforlisten 2544242 00:04:25.984 00:38:18 -- common/autotest_common.sh@817 -- # '[' -z 2544242 ']' 00:04:25.984 00:38:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.984 00:38:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:25.984 00:38:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:25.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.985 00:38:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:25.985 00:38:18 -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.985 00:38:18 -- common/autotest_common.sh@10 -- # set +x 00:04:25.985 [2024-04-27 00:38:18.543500] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:25.985 [2024-04-27 00:38:18.543605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2544242 ] 00:04:25.985 EAL: No free 2048 kB hugepages reported on node 1 00:04:25.985 [2024-04-27 00:38:18.661103] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.243 [2024-04-27 00:38:18.752478] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.814 00:38:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:26.814 00:38:19 -- common/autotest_common.sh@850 -- # return 0 00:04:26.814 00:38:19 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:26.814 00:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:26.814 00:38:19 -- common/autotest_common.sh@10 -- # set +x 00:04:26.814 [2024-04-27 00:38:19.237998] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:26.814 request: 00:04:26.814 { 00:04:26.814 "trtype": "tcp", 00:04:26.814 "method": "nvmf_get_transports", 00:04:26.814 "req_id": 1 00:04:26.814 } 00:04:26.814 Got JSON-RPC error response 00:04:26.814 response: 00:04:26.814 { 00:04:26.814 "code": -19, 00:04:26.814 "message": "No such device" 00:04:26.814 } 00:04:26.814 00:38:19 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:04:26.814 00:38:19 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:26.814 00:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:26.814 00:38:19 -- common/autotest_common.sh@10 -- # set +x 00:04:26.814 [2024-04-27 00:38:19.246107] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:26.814 00:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:26.814 00:38:19 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:26.814 00:38:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:04:26.814 00:38:19 -- common/autotest_common.sh@10 -- # set +x 00:04:26.814 00:38:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:04:26.814 00:38:19 -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:26.814 { 
00:04:26.814 "subsystems": [ 00:04:26.814 { 00:04:26.814 "subsystem": "keyring", 00:04:26.814 "config": [] 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "subsystem": "iobuf", 00:04:26.814 "config": [ 00:04:26.814 { 00:04:26.814 "method": "iobuf_set_options", 00:04:26.814 "params": { 00:04:26.814 "small_pool_count": 8192, 00:04:26.814 "large_pool_count": 1024, 00:04:26.814 "small_bufsize": 8192, 00:04:26.814 "large_bufsize": 135168 00:04:26.814 } 00:04:26.814 } 00:04:26.814 ] 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "subsystem": "sock", 00:04:26.814 "config": [ 00:04:26.814 { 00:04:26.814 "method": "sock_impl_set_options", 00:04:26.814 "params": { 00:04:26.814 "impl_name": "posix", 00:04:26.814 "recv_buf_size": 2097152, 00:04:26.814 "send_buf_size": 2097152, 00:04:26.814 "enable_recv_pipe": true, 00:04:26.814 "enable_quickack": false, 00:04:26.814 "enable_placement_id": 0, 00:04:26.814 "enable_zerocopy_send_server": true, 00:04:26.814 "enable_zerocopy_send_client": false, 00:04:26.814 "zerocopy_threshold": 0, 00:04:26.814 "tls_version": 0, 00:04:26.814 "enable_ktls": false 00:04:26.814 } 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "method": "sock_impl_set_options", 00:04:26.814 "params": { 00:04:26.814 "impl_name": "ssl", 00:04:26.814 "recv_buf_size": 4096, 00:04:26.814 "send_buf_size": 4096, 00:04:26.814 "enable_recv_pipe": true, 00:04:26.814 "enable_quickack": false, 00:04:26.814 "enable_placement_id": 0, 00:04:26.814 "enable_zerocopy_send_server": true, 00:04:26.814 "enable_zerocopy_send_client": false, 00:04:26.814 "zerocopy_threshold": 0, 00:04:26.814 "tls_version": 0, 00:04:26.814 "enable_ktls": false 00:04:26.814 } 00:04:26.814 } 00:04:26.814 ] 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "subsystem": "vmd", 00:04:26.814 "config": [] 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "subsystem": "accel", 00:04:26.814 "config": [ 00:04:26.814 { 00:04:26.814 "method": "accel_set_options", 00:04:26.814 "params": { 00:04:26.814 "small_cache_size": 128, 00:04:26.814 "large_cache_size": 16, 00:04:26.814 "task_count": 2048, 00:04:26.814 "sequence_count": 2048, 00:04:26.814 "buf_count": 2048 00:04:26.814 } 00:04:26.814 } 00:04:26.814 ] 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "subsystem": "bdev", 00:04:26.814 "config": [ 00:04:26.814 { 00:04:26.814 "method": "bdev_set_options", 00:04:26.814 "params": { 00:04:26.814 "bdev_io_pool_size": 65535, 00:04:26.814 "bdev_io_cache_size": 256, 00:04:26.814 "bdev_auto_examine": true, 00:04:26.814 "iobuf_small_cache_size": 128, 00:04:26.814 "iobuf_large_cache_size": 16 00:04:26.814 } 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "method": "bdev_raid_set_options", 00:04:26.814 "params": { 00:04:26.814 "process_window_size_kb": 1024 00:04:26.814 } 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "method": "bdev_iscsi_set_options", 00:04:26.814 "params": { 00:04:26.814 "timeout_sec": 30 00:04:26.814 } 00:04:26.814 }, 00:04:26.814 { 00:04:26.814 "method": "bdev_nvme_set_options", 00:04:26.814 "params": { 00:04:26.814 "action_on_timeout": "none", 00:04:26.814 "timeout_us": 0, 00:04:26.814 "timeout_admin_us": 0, 00:04:26.814 "keep_alive_timeout_ms": 10000, 00:04:26.814 "arbitration_burst": 0, 00:04:26.814 "low_priority_weight": 0, 00:04:26.814 "medium_priority_weight": 0, 00:04:26.814 "high_priority_weight": 0, 00:04:26.814 "nvme_adminq_poll_period_us": 10000, 00:04:26.814 "nvme_ioq_poll_period_us": 0, 00:04:26.814 "io_queue_requests": 0, 00:04:26.814 "delay_cmd_submit": true, 00:04:26.814 "transport_retry_count": 4, 00:04:26.814 "bdev_retry_count": 3, 00:04:26.814 
"transport_ack_timeout": 0, 00:04:26.815 "ctrlr_loss_timeout_sec": 0, 00:04:26.815 "reconnect_delay_sec": 0, 00:04:26.815 "fast_io_fail_timeout_sec": 0, 00:04:26.815 "disable_auto_failback": false, 00:04:26.815 "generate_uuids": false, 00:04:26.815 "transport_tos": 0, 00:04:26.815 "nvme_error_stat": false, 00:04:26.815 "rdma_srq_size": 0, 00:04:26.815 "io_path_stat": false, 00:04:26.815 "allow_accel_sequence": false, 00:04:26.815 "rdma_max_cq_size": 0, 00:04:26.815 "rdma_cm_event_timeout_ms": 0, 00:04:26.815 "dhchap_digests": [ 00:04:26.815 "sha256", 00:04:26.815 "sha384", 00:04:26.815 "sha512" 00:04:26.815 ], 00:04:26.815 "dhchap_dhgroups": [ 00:04:26.815 "null", 00:04:26.815 "ffdhe2048", 00:04:26.815 "ffdhe3072", 00:04:26.815 "ffdhe4096", 00:04:26.815 "ffdhe6144", 00:04:26.815 "ffdhe8192" 00:04:26.815 ] 00:04:26.815 } 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "method": "bdev_nvme_set_hotplug", 00:04:26.815 "params": { 00:04:26.815 "period_us": 100000, 00:04:26.815 "enable": false 00:04:26.815 } 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "method": "bdev_wait_for_examine" 00:04:26.815 } 00:04:26.815 ] 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "subsystem": "scsi", 00:04:26.815 "config": null 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "subsystem": "scheduler", 00:04:26.815 "config": [ 00:04:26.815 { 00:04:26.815 "method": "framework_set_scheduler", 00:04:26.815 "params": { 00:04:26.815 "name": "static" 00:04:26.815 } 00:04:26.815 } 00:04:26.815 ] 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "subsystem": "vhost_scsi", 00:04:26.815 "config": [] 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "subsystem": "vhost_blk", 00:04:26.815 "config": [] 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "subsystem": "ublk", 00:04:26.815 "config": [] 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "subsystem": "nbd", 00:04:26.815 "config": [] 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "subsystem": "nvmf", 00:04:26.815 "config": [ 00:04:26.815 { 00:04:26.815 "method": "nvmf_set_config", 00:04:26.815 "params": { 00:04:26.815 "discovery_filter": "match_any", 00:04:26.815 "admin_cmd_passthru": { 00:04:26.815 "identify_ctrlr": false 00:04:26.815 } 00:04:26.815 } 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "method": "nvmf_set_max_subsystems", 00:04:26.815 "params": { 00:04:26.815 "max_subsystems": 1024 00:04:26.815 } 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "method": "nvmf_set_crdt", 00:04:26.815 "params": { 00:04:26.815 "crdt1": 0, 00:04:26.815 "crdt2": 0, 00:04:26.815 "crdt3": 0 00:04:26.815 } 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "method": "nvmf_create_transport", 00:04:26.815 "params": { 00:04:26.815 "trtype": "TCP", 00:04:26.815 "max_queue_depth": 128, 00:04:26.815 "max_io_qpairs_per_ctrlr": 127, 00:04:26.815 "in_capsule_data_size": 4096, 00:04:26.815 "max_io_size": 131072, 00:04:26.815 "io_unit_size": 131072, 00:04:26.815 "max_aq_depth": 128, 00:04:26.815 "num_shared_buffers": 511, 00:04:26.815 "buf_cache_size": 4294967295, 00:04:26.815 "dif_insert_or_strip": false, 00:04:26.815 "zcopy": false, 00:04:26.815 "c2h_success": true, 00:04:26.815 "sock_priority": 0, 00:04:26.815 "abort_timeout_sec": 1, 00:04:26.815 "ack_timeout": 0, 00:04:26.815 "data_wr_pool_size": 0 00:04:26.815 } 00:04:26.815 } 00:04:26.815 ] 00:04:26.815 }, 00:04:26.815 { 00:04:26.815 "subsystem": "iscsi", 00:04:26.815 "config": [ 00:04:26.815 { 00:04:26.815 "method": "iscsi_set_options", 00:04:26.815 "params": { 00:04:26.815 "node_base": "iqn.2016-06.io.spdk", 00:04:26.815 "max_sessions": 128, 00:04:26.815 "max_connections_per_session": 2, 
00:04:26.815 "max_queue_depth": 64, 00:04:26.815 "default_time2wait": 2, 00:04:26.815 "default_time2retain": 20, 00:04:26.815 "first_burst_length": 8192, 00:04:26.815 "immediate_data": true, 00:04:26.815 "allow_duplicated_isid": false, 00:04:26.815 "error_recovery_level": 0, 00:04:26.815 "nop_timeout": 60, 00:04:26.815 "nop_in_interval": 30, 00:04:26.815 "disable_chap": false, 00:04:26.815 "require_chap": false, 00:04:26.815 "mutual_chap": false, 00:04:26.815 "chap_group": 0, 00:04:26.815 "max_large_datain_per_connection": 64, 00:04:26.815 "max_r2t_per_connection": 4, 00:04:26.815 "pdu_pool_size": 36864, 00:04:26.815 "immediate_data_pool_size": 16384, 00:04:26.815 "data_out_pool_size": 2048 00:04:26.815 } 00:04:26.815 } 00:04:26.815 ] 00:04:26.815 } 00:04:26.815 ] 00:04:26.815 } 00:04:26.815 00:38:19 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:26.815 00:38:19 -- rpc/skip_rpc.sh@40 -- # killprocess 2544242 00:04:26.815 00:38:19 -- common/autotest_common.sh@936 -- # '[' -z 2544242 ']' 00:04:26.815 00:38:19 -- common/autotest_common.sh@940 -- # kill -0 2544242 00:04:26.815 00:38:19 -- common/autotest_common.sh@941 -- # uname 00:04:26.815 00:38:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:26.815 00:38:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2544242 00:04:26.815 00:38:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:26.815 00:38:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:26.815 00:38:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2544242' 00:04:26.815 killing process with pid 2544242 00:04:26.815 00:38:19 -- common/autotest_common.sh@955 -- # kill 2544242 00:04:26.815 00:38:19 -- common/autotest_common.sh@960 -- # wait 2544242 00:04:27.756 00:38:20 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2544700 00:04:27.756 00:38:20 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:27.756 00:38:20 -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:33.028 00:38:25 -- rpc/skip_rpc.sh@50 -- # killprocess 2544700 00:04:33.028 00:38:25 -- common/autotest_common.sh@936 -- # '[' -z 2544700 ']' 00:04:33.028 00:38:25 -- common/autotest_common.sh@940 -- # kill -0 2544700 00:04:33.028 00:38:25 -- common/autotest_common.sh@941 -- # uname 00:04:33.028 00:38:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:33.028 00:38:25 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2544700 00:04:33.028 00:38:25 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:33.028 00:38:25 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:33.028 00:38:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2544700' 00:04:33.028 killing process with pid 2544700 00:04:33.028 00:38:25 -- common/autotest_common.sh@955 -- # kill 2544700 00:04:33.028 00:38:25 -- common/autotest_common.sh@960 -- # wait 2544700 00:04:33.594 00:38:26 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:33.594 00:38:26 -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/log.txt 00:04:33.594 00:04:33.594 real 0m7.700s 00:04:33.594 user 0m7.339s 00:04:33.594 sys 0m0.653s 00:04:33.594 00:38:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:33.594 00:38:26 -- common/autotest_common.sh@10 -- # set +x 
00:04:33.594 ************************************ 00:04:33.594 END TEST skip_rpc_with_json 00:04:33.594 ************************************ 00:04:33.594 00:38:26 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:33.594 00:38:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.594 00:38:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.594 00:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:33.594 ************************************ 00:04:33.594 START TEST skip_rpc_with_delay 00:04:33.594 ************************************ 00:04:33.594 00:38:26 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:04:33.594 00:38:26 -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.594 00:38:26 -- common/autotest_common.sh@638 -- # local es=0 00:04:33.594 00:38:26 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.594 00:38:26 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.594 00:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:33.594 00:38:26 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.594 00:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:33.594 00:38:26 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.594 00:38:26 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:33.594 00:38:26 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:33.594 00:38:26 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:33.594 00:38:26 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.854 [2024-04-27 00:38:26.361573] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
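The delay test leans on spdk_tgt's own argument validation, visible in the ERROR line above: --wait-for-rpc is meaningless when no RPC server will be started, so the binary must exit non-zero on its own. A hedged standalone version of that assertion, with the same illustrative paths as before:

    # Sketch: spdk_tgt validates this flag pair itself and must exit non-zero.
    # (If the flags were accepted, the target would keep running instead.)
    if ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "ERROR: incompatible flags were accepted" >&2
        exit 1
    fi
    echo "OK: --wait-for-rpc without an RPC server was rejected"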
00:04:33.854 [2024-04-27 00:38:26.361694] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:33.854 00:38:26 -- common/autotest_common.sh@641 -- # es=1 00:04:33.854 00:38:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:33.854 00:38:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:04:33.854 00:38:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:33.854 00:04:33.854 real 0m0.124s 00:04:33.854 user 0m0.066s 00:04:33.854 sys 0m0.057s 00:04:33.854 00:38:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:33.854 00:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:33.854 ************************************ 00:04:33.854 END TEST skip_rpc_with_delay 00:04:33.854 ************************************ 00:04:33.854 00:38:26 -- rpc/skip_rpc.sh@77 -- # uname 00:04:33.854 00:38:26 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:33.854 00:38:26 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:33.854 00:38:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:33.854 00:38:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:33.854 00:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:33.854 ************************************ 00:04:33.854 START TEST exit_on_failed_rpc_init 00:04:33.854 ************************************ 00:04:33.854 00:38:26 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:04:33.854 00:38:26 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2545959 00:04:33.854 00:38:26 -- rpc/skip_rpc.sh@63 -- # waitforlisten 2545959 00:04:33.854 00:38:26 -- common/autotest_common.sh@817 -- # '[' -z 2545959 ']' 00:04:33.854 00:38:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.854 00:38:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:33.854 00:38:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.854 00:38:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:33.854 00:38:26 -- common/autotest_common.sh@10 -- # set +x 00:04:33.854 00:38:26 -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:34.113 [2024-04-27 00:38:26.599805] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:34.113 [2024-04-27 00:38:26.599907] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545959 ] 00:04:34.113 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.113 [2024-04-27 00:38:26.714781] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.113 [2024-04-27 00:38:26.806937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.681 00:38:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:34.681 00:38:27 -- common/autotest_common.sh@850 -- # return 0 00:04:34.681 00:38:27 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.681 00:38:27 -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.681 00:38:27 -- common/autotest_common.sh@638 -- # local es=0 00:04:34.681 00:38:27 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.681 00:38:27 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.681 00:38:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:34.681 00:38:27 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.681 00:38:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:34.681 00:38:27 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.681 00:38:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:04:34.681 00:38:27 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:04:34.681 00:38:27 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:34.681 00:38:27 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.681 [2024-04-27 00:38:27.348379] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:04:34.681 [2024-04-27 00:38:27.348453] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2545977 ] 00:04:34.940 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.940 [2024-04-27 00:38:27.460608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.940 [2024-04-27 00:38:27.599808] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.940 [2024-04-27 00:38:27.599914] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:04:34.940 [2024-04-27 00:38:27.599941] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:34.940 [2024-04-27 00:38:27.599959] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:35.201 00:38:27 -- common/autotest_common.sh@641 -- # es=234 00:04:35.201 00:38:27 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:04:35.201 00:38:27 -- common/autotest_common.sh@650 -- # es=106 00:04:35.201 00:38:27 -- common/autotest_common.sh@651 -- # case "$es" in 00:04:35.201 00:38:27 -- common/autotest_common.sh@658 -- # es=1 00:04:35.201 00:38:27 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:04:35.201 00:38:27 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:35.201 00:38:27 -- rpc/skip_rpc.sh@70 -- # killprocess 2545959 00:04:35.201 00:38:27 -- common/autotest_common.sh@936 -- # '[' -z 2545959 ']' 00:04:35.201 00:38:27 -- common/autotest_common.sh@940 -- # kill -0 2545959 00:04:35.201 00:38:27 -- common/autotest_common.sh@941 -- # uname 00:04:35.201 00:38:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:04:35.201 00:38:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2545959 00:04:35.461 00:38:27 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:04:35.461 00:38:27 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:04:35.461 00:38:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2545959' 00:04:35.461 killing process with pid 2545959 00:04:35.461 00:38:27 -- common/autotest_common.sh@955 -- # kill 2545959 00:04:35.461 00:38:27 -- common/autotest_common.sh@960 -- # wait 2545959 00:04:36.117 00:04:36.117 real 0m2.225s 00:04:36.117 user 0m2.576s 00:04:36.117 sys 0m0.500s 00:04:36.117 00:38:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.117 00:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.117 ************************************ 00:04:36.117 END TEST exit_on_failed_rpc_init 00:04:36.117 ************************************ 00:04:36.117 00:38:28 -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc/config.json 00:04:36.117 00:04:36.117 real 0m16.516s 00:04:36.117 user 0m15.735s 00:04:36.117 sys 0m1.910s 00:04:36.117 00:38:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.117 00:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.117 ************************************ 00:04:36.117 END TEST skip_rpc 00:04:36.117 ************************************ 00:04:36.117 00:38:28 -- spdk/autotest.sh@167 -- # run_test rpc_client /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:36.117 00:38:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.117 00:38:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.117 00:38:28 -- common/autotest_common.sh@10 -- # set +x 00:04:36.379 ************************************ 00:04:36.379 START TEST rpc_client 00:04:36.379 ************************************ 00:04:36.379 00:38:28 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:36.379 * Looking for test storage... 
00:04:36.379 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client 00:04:36.379 00:38:28 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:36.379 OK 00:04:36.379 00:38:29 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:36.379 00:04:36.379 real 0m0.124s 00:04:36.379 user 0m0.050s 00:04:36.379 sys 0m0.080s 00:04:36.379 00:38:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:04:36.379 00:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:36.379 ************************************ 00:04:36.379 END TEST rpc_client 00:04:36.379 ************************************ 00:04:36.379 00:38:29 -- spdk/autotest.sh@168 -- # run_test json_config /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:36.379 00:38:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:36.379 00:38:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:36.379 00:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:36.638 ************************************ 00:04:36.638 START TEST json_config 00:04:36.638 ************************************ 00:04:36.638 00:38:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config.sh 00:04:36.638 00:38:29 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:04:36.638 00:38:29 -- nvmf/common.sh@7 -- # uname -s 00:04:36.638 00:38:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.638 00:38:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.638 00:38:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.638 00:38:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.638 00:38:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.638 00:38:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.638 00:38:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.638 00:38:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.638 00:38:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.638 00:38:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.638 00:38:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:04:36.638 00:38:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:04:36.638 00:38:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.638 00:38:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.638 00:38:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.638 00:38:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.638 00:38:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:04:36.638 00:38:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.638 00:38:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.638 00:38:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.638 00:38:29 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.638 00:38:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.638 00:38:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.638 00:38:29 -- paths/export.sh@5 -- # export PATH 00:04:36.638 00:38:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.638 00:38:29 -- nvmf/common.sh@47 -- # : 0 00:04:36.638 00:38:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:36.638 00:38:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:36.638 00:38:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.638 00:38:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.639 00:38:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.639 00:38:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:36.639 00:38:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:36.639 00:38:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:36.639 00:38:29 -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:04:36.639 00:38:29 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:36.639 00:38:29 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:36.639 00:38:29 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:36.639 00:38:29 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:36.639 00:38:29 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:36.639 00:38:29 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:36.639 00:38:29 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:36.639 00:38:29 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:36.639 00:38:29 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:36.639 00:38:29 -- json_config/json_config.sh@33 
-- # declare -A app_params 00:04:36.639 00:38:29 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json') 00:04:36.639 00:38:29 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:36.639 00:38:29 -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:36.639 00:38:29 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.639 00:38:29 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:36.639 INFO: JSON configuration test init 00:04:36.639 00:38:29 -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:36.639 00:38:29 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:36.639 00:38:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:36.639 00:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:36.639 00:38:29 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:36.639 00:38:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:36.639 00:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:36.639 00:38:29 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:36.639 00:38:29 -- json_config/common.sh@9 -- # local app=target 00:04:36.639 00:38:29 -- json_config/common.sh@10 -- # shift 00:04:36.639 00:38:29 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.639 00:38:29 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.639 00:38:29 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.639 00:38:29 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.639 00:38:29 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.639 00:38:29 -- json_config/common.sh@22 -- # app_pid["$app"]=2546603 00:04:36.639 00:38:29 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.639 Waiting for target to run... 00:04:36.639 00:38:29 -- json_config/common.sh@25 -- # waitforlisten 2546603 /var/tmp/spdk_tgt.sock 00:04:36.639 00:38:29 -- common/autotest_common.sh@817 -- # '[' -z 2546603 ']' 00:04:36.639 00:38:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.639 00:38:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:36.639 00:38:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.639 00:38:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:36.639 00:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:36.639 00:38:29 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:36.639 [2024-04-27 00:38:29.306585] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:36.639 [2024-04-27 00:38:29.306706] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2546603 ] 00:04:36.899 EAL: No free 2048 kB hugepages reported on node 1 00:04:36.899 [2024-04-27 00:38:29.593526] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.158 [2024-04-27 00:38:29.672079] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.417 00:38:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:04:37.417 00:38:29 -- common/autotest_common.sh@850 -- # return 0 00:04:37.417 00:38:29 -- json_config/common.sh@26 -- # echo '' 00:04:37.417 00:04:37.417 00:38:29 -- json_config/json_config.sh@269 -- # create_accel_config 00:04:37.417 00:38:29 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:37.417 00:38:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:37.417 00:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:37.417 00:38:29 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:37.417 00:38:29 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:37.417 00:38:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:37.417 00:38:29 -- common/autotest_common.sh@10 -- # set +x 00:04:37.417 00:38:30 -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:37.417 00:38:30 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:37.417 00:38:30 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:47.405 00:38:38 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:47.405 00:38:38 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:47.405 00:38:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:47.405 00:38:38 -- common/autotest_common.sh@10 -- # set +x 00:04:47.405 00:38:38 -- json_config/json_config.sh@45 -- # local ret=0 00:04:47.405 00:38:38 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:47.405 00:38:38 -- json_config/json_config.sh@46 -- # local enabled_types 00:04:47.405 00:38:38 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:47.405 00:38:38 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:47.405 00:38:38 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:47.405 00:38:39 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:47.405 00:38:39 -- json_config/json_config.sh@48 -- # local get_types 00:04:47.405 00:38:39 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:47.405 00:38:39 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:47.405 00:38:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.405 00:38:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.405 00:38:39 -- json_config/json_config.sh@55 -- # return 0 00:04:47.405 00:38:39 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:04:47.405 00:38:39 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:47.405 00:38:39 -- json_config/json_config.sh@286 -- # [[ 
0 -eq 1 ]] 00:04:47.405 00:38:39 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:04:47.405 00:38:39 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:04:47.405 00:38:39 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:04:47.405 00:38:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:04:47.405 00:38:39 -- common/autotest_common.sh@10 -- # set +x 00:04:47.405 00:38:39 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:47.405 00:38:39 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:04:47.405 00:38:39 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:04:47.405 00:38:39 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.405 00:38:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:47.405 MallocForNvmf0 00:04:47.405 00:38:39 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.405 00:38:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:47.405 MallocForNvmf1 00:04:47.405 00:38:39 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.405 00:38:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:47.405 [2024-04-27 00:38:39.530569] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:47.405 00:38:39 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.405 00:38:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:47.405 00:38:39 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.405 00:38:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:47.405 00:38:39 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.405 00:38:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:47.405 00:38:39 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.405 00:38:39 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:47.665 [2024-04-27 00:38:40.119075] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:47.665 00:38:40 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:04:47.665 00:38:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.665 00:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.665 00:38:40 
-- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:47.665 00:38:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.665 00:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.665 00:38:40 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:47.665 00:38:40 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.665 00:38:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:47.665 MallocBdevForConfigChangeCheck 00:04:47.665 00:38:40 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:47.665 00:38:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:04:47.665 00:38:40 -- common/autotest_common.sh@10 -- # set +x 00:04:47.924 00:38:40 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:47.924 00:38:40 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:47.924 00:38:40 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:04:47.924 INFO: shutting down applications... 00:04:47.924 00:38:40 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:47.924 00:38:40 -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:47.924 00:38:40 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:47.924 00:38:40 -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:56.045 Calling clear_iscsi_subsystem 00:04:56.045 Calling clear_nvmf_subsystem 00:04:56.045 Calling clear_nbd_subsystem 00:04:56.046 Calling clear_ublk_subsystem 00:04:56.046 Calling clear_vhost_blk_subsystem 00:04:56.046 Calling clear_vhost_scsi_subsystem 00:04:56.046 Calling clear_bdev_subsystem 00:04:56.046 00:38:47 -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py 00:04:56.046 00:38:47 -- json_config/json_config.sh@343 -- # count=100 00:04:56.046 00:38:47 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:56.046 00:38:47 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:56.046 00:38:47 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:56.046 00:38:47 -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:56.046 00:38:47 -- json_config/json_config.sh@345 -- # break 00:04:56.046 00:38:47 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:56.046 00:38:47 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:56.046 00:38:47 -- json_config/common.sh@31 -- # local app=target 00:04:56.046 00:38:47 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:56.046 00:38:47 -- json_config/common.sh@35 -- # [[ -n 2546603 ]] 00:04:56.046 00:38:47 -- json_config/common.sh@38 -- # kill -SIGINT 2546603 00:04:56.046 00:38:47 -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:56.046 00:38:47 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.046 00:38:47 -- json_config/common.sh@41 -- # kill -0 2546603 
00:04:56.046 00:38:47 -- json_config/common.sh@45 -- # sleep 0.5 00:04:56.046 00:38:48 -- json_config/common.sh@40 -- # (( i++ )) 00:04:56.046 00:38:48 -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:56.046 00:38:48 -- json_config/common.sh@41 -- # kill -0 2546603 00:04:56.046 00:38:48 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:56.046 00:38:48 -- json_config/common.sh@43 -- # break 00:04:56.046 00:38:48 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:56.046 00:38:48 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:56.046 SPDK target shutdown done 00:04:56.046 00:38:48 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:04:56.046 INFO: relaunching applications... 00:04:56.046 00:38:48 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.046 00:38:48 -- json_config/common.sh@9 -- # local app=target 00:04:56.046 00:38:48 -- json_config/common.sh@10 -- # shift 00:04:56.046 00:38:48 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.046 00:38:48 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.046 00:38:48 -- json_config/common.sh@15 -- # local app_extra_params= 00:04:56.046 00:38:48 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.046 00:38:48 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.046 00:38:48 -- json_config/common.sh@22 -- # app_pid["$app"]=2550488 00:04:56.046 00:38:48 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.046 Waiting for target to run... 00:04:56.046 00:38:48 -- json_config/common.sh@25 -- # waitforlisten 2550488 /var/tmp/spdk_tgt.sock 00:04:56.046 00:38:48 -- common/autotest_common.sh@817 -- # '[' -z 2550488 ']' 00:04:56.046 00:38:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.046 00:38:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:04:56.046 00:38:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.046 00:38:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:04:56.046 00:38:48 -- common/autotest_common.sh@10 -- # set +x 00:04:56.046 00:38:48 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:04:56.046 [2024-04-27 00:38:48.521127] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:04:56.046 [2024-04-27 00:38:48.521261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550488 ] 00:04:56.046 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.305 [2024-04-27 00:38:48.845002] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.305 [2024-04-27 00:38:48.923866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.295 [2024-04-27 00:38:57.808871] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:06.295 [2024-04-27 00:38:57.841120] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:06.295 00:38:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:06.295 00:38:57 -- common/autotest_common.sh@850 -- # return 0 00:05:06.295 00:38:57 -- json_config/common.sh@26 -- # echo '' 00:05:06.295 00:05:06.295 00:38:57 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:05:06.295 00:38:57 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:06.295 INFO: Checking if target configuration is the same... 00:05:06.295 00:38:57 -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.295 00:38:57 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:05:06.295 00:38:57 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.295 + '[' 2 -ne 2 ']' 00:05:06.295 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:06.295 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:05:06.295 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:05:06.295 +++ basename /dev/fd/62 00:05:06.295 ++ mktemp /tmp/62.XXX 00:05:06.295 + tmp_file_1=/tmp/62.YkT 00:05:06.295 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.295 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:06.295 + tmp_file_2=/tmp/spdk_tgt_config.json.Xpl 00:05:06.295 + ret=0 00:05:06.296 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.296 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.296 + diff -u /tmp/62.YkT /tmp/spdk_tgt_config.json.Xpl 00:05:06.296 + echo 'INFO: JSON config files are the same' 00:05:06.296 INFO: JSON config files are the same 00:05:06.296 + rm /tmp/62.YkT /tmp/spdk_tgt_config.json.Xpl 00:05:06.296 + exit 0 00:05:06.296 00:38:58 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:05:06.296 00:38:58 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:06.296 INFO: changing configuration and checking if this can be detected... 
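Both comparisons in this test share one mechanism: dump the live configuration with save_config, normalize both JSON files, and diff them. A condensed sketch of that json_diff.sh flow, assuming config_filter.py (from spdk/test/json_config) filters stdin the way the wrapper above uses it:

    # Sketch of the json_diff.sh core: normalize, then compare.
    sort_cfg() { ./test/json_config/config_filter.py -method sort; }

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config | sort_cfg > /tmp/live.json
    sort_cfg < ./spdk_tgt_config.json > /tmp/saved.json

    if diff -u /tmp/live.json /tmp/saved.json; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi
    rm -f /tmp/live.json /tmp/saved.json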
00:05:06.296 00:38:58 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.296 00:38:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:06.296 00:38:58 -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.296 00:38:58 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:05:06.296 00:38:58 -- json_config/common.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:06.296 + '[' 2 -ne 2 ']' 00:05:06.296 +++ dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:06.296 ++ readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/../.. 00:05:06.296 + rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:05:06.296 +++ basename /dev/fd/62 00:05:06.296 ++ mktemp /tmp/62.XXX 00:05:06.296 + tmp_file_1=/tmp/62.WlM 00:05:06.296 +++ basename /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:06.296 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:06.296 + tmp_file_2=/tmp/spdk_tgt_config.json.3wK 00:05:06.296 + ret=0 00:05:06.296 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.296 + /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:06.296 + diff -u /tmp/62.WlM /tmp/spdk_tgt_config.json.3wK 00:05:06.296 + ret=1 00:05:06.296 + echo '=== Start of file: /tmp/62.WlM ===' 00:05:06.296 + cat /tmp/62.WlM 00:05:06.296 + echo '=== End of file: /tmp/62.WlM ===' 00:05:06.296 + echo '' 00:05:06.296 + echo '=== Start of file: /tmp/spdk_tgt_config.json.3wK ===' 00:05:06.296 + cat /tmp/spdk_tgt_config.json.3wK 00:05:06.296 + echo '=== End of file: /tmp/spdk_tgt_config.json.3wK ===' 00:05:06.296 + echo '' 00:05:06.296 + rm /tmp/62.WlM /tmp/spdk_tgt_config.json.3wK 00:05:06.296 + exit 1 00:05:06.296 00:38:58 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:05:06.296 INFO: configuration change detected. 
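The killprocess/shutdown sequences seen throughout this section follow the same loop from json_config/common.sh: send SIGINT, then poll with kill -0 every half second for up to 30 tries. A sketch of that loop; the pid is illustrative, copied from this run:

    # Sketch of the harness shutdown loop: SIGINT, then poll until exit.
    app_pid=2550488
    kill -SIGINT "$app_pid"
    for _ in $(seq 1 30); do
        if ! kill -0 "$app_pid" 2>/dev/null; then
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5
    done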
00:05:06.296 00:38:58 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:05:06.296 00:38:58 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:05:06.296 00:38:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:06.296 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:05:06.296 00:38:58 -- json_config/json_config.sh@307 -- # local ret=0 00:05:06.296 00:38:58 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:05:06.296 00:38:58 -- json_config/json_config.sh@317 -- # [[ -n 2550488 ]] 00:05:06.296 00:38:58 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:05:06.296 00:38:58 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:05:06.296 00:38:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:06.296 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:05:06.296 00:38:58 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:05:06.296 00:38:58 -- json_config/json_config.sh@193 -- # uname -s 00:05:06.296 00:38:58 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:05:06.296 00:38:58 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:05:06.296 00:38:58 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:05:06.296 00:38:58 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:05:06.296 00:38:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:06.296 00:38:58 -- common/autotest_common.sh@10 -- # set +x 00:05:06.296 00:38:58 -- json_config/json_config.sh@323 -- # killprocess 2550488 00:05:06.296 00:38:58 -- common/autotest_common.sh@936 -- # '[' -z 2550488 ']' 00:05:06.296 00:38:58 -- common/autotest_common.sh@940 -- # kill -0 2550488 00:05:06.296 00:38:58 -- common/autotest_common.sh@941 -- # uname 00:05:06.296 00:38:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:06.296 00:38:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2550488 00:05:06.296 00:38:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:06.296 00:38:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:06.296 00:38:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2550488' 00:05:06.296 killing process with pid 2550488 00:05:06.296 00:38:58 -- common/autotest_common.sh@955 -- # kill 2550488 00:05:06.296 00:38:58 -- common/autotest_common.sh@960 -- # wait 2550488 00:05:09.584 00:39:02 -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/dsa-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.584 00:39:02 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:05:09.584 00:39:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:05:09.584 00:39:02 -- common/autotest_common.sh@10 -- # set +x 00:05:09.584 00:39:02 -- json_config/json_config.sh@328 -- # return 0 00:05:09.584 00:39:02 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:05:09.584 INFO: Success 00:05:09.584 00:05:09.584 real 0m32.968s 00:05:09.584 user 0m30.594s 00:05:09.584 sys 0m1.975s 00:05:09.584 00:39:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:09.584 00:39:02 -- common/autotest_common.sh@10 -- # set +x 00:05:09.584 ************************************ 00:05:09.584 END TEST json_config 00:05:09.584 ************************************ 00:05:09.584 00:39:02 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.584 00:39:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.584 00:39:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.584 00:39:02 -- common/autotest_common.sh@10 -- # set +x 00:05:09.584 ************************************ 00:05:09.584 START TEST json_config_extra_key 00:05:09.584 ************************************ 00:05:09.584 00:39:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:09.584 00:39:02 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:05:09.584 00:39:02 -- nvmf/common.sh@7 -- # uname -s 00:05:09.844 00:39:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:09.844 00:39:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:09.844 00:39:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:09.844 00:39:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:09.844 00:39:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:09.844 00:39:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:09.844 00:39:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:09.844 00:39:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:09.844 00:39:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:09.844 00:39:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:09.844 00:39:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:05:09.844 00:39:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:05:09.844 00:39:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:09.844 00:39:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:09.844 00:39:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:09.844 00:39:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:09.844 00:39:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:05:09.844 00:39:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:09.844 00:39:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:09.844 00:39:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:09.844 00:39:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.844 00:39:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.844 00:39:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.844 00:39:02 -- paths/export.sh@5 -- # export PATH 00:05:09.844 00:39:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:09.844 00:39:02 -- nvmf/common.sh@47 -- # : 0 00:05:09.844 00:39:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:09.844 00:39:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:09.844 00:39:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:09.845 00:39:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:09.845 00:39:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:09.845 00:39:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:09.845 00:39:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:09.845 00:39:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/common.sh 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:09.845 INFO: launching applications... 
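The harness now starts a target with the extra config and blocks until its RPC socket answers. Assuming the exact spdk_tgt flags from the invocation traced just after this, start-up plus waitforlisten reduce to roughly the following sketch; the in-tree waitforlisten is more defensive, and app_pid plus the 100-iteration budget simply mirror the max_retries=100 seen in the trace:

# Launch spdk_tgt with the extra key config, then poll the RPC socket.
rootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk
app_socket=/var/tmp/spdk_tgt.sock
"$rootdir"/build/bin/spdk_tgt -m 0x1 -s 1024 -r "$app_socket" \
    --json "$rootdir"/test/json_config/extra_key.json &
app_pid=$!
for ((i = 0; i < 100; i++)); do
    # rpc_get_methods only answers once the app is listening on the socket
    "$rootdir"/scripts/rpc.py -t 1 -s "$app_socket" rpc_get_methods \
        &> /dev/null && break
    sleep 0.5
done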
00:05:09.845 00:39:02 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.845 00:39:02 -- json_config/common.sh@9 -- # local app=target 00:05:09.845 00:39:02 -- json_config/common.sh@10 -- # shift 00:05:09.845 00:39:02 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:09.845 00:39:02 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:09.845 00:39:02 -- json_config/common.sh@15 -- # local app_extra_params= 00:05:09.845 00:39:02 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.845 00:39:02 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:09.845 00:39:02 -- json_config/common.sh@22 -- # app_pid["$app"]=2553613 00:05:09.845 00:39:02 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:09.845 Waiting for target to run... 00:05:09.845 00:39:02 -- json_config/common.sh@25 -- # waitforlisten 2553613 /var/tmp/spdk_tgt.sock 00:05:09.845 00:39:02 -- common/autotest_common.sh@817 -- # '[' -z 2553613 ']' 00:05:09.845 00:39:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:09.845 00:39:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:09.845 00:39:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:09.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:09.845 00:39:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:09.845 00:39:02 -- common/autotest_common.sh@10 -- # set +x 00:05:09.845 00:39:02 -- json_config/common.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/extra_key.json 00:05:09.845 [2024-04-27 00:39:02.384965] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:09.845 [2024-04-27 00:39:02.385089] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553613 ] 00:05:09.845 EAL: No free 2048 kB hugepages reported on node 1 00:05:10.105 [2024-04-27 00:39:02.697169] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.105 [2024-04-27 00:39:02.777283] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.677 00:39:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:10.677 00:39:03 -- common/autotest_common.sh@850 -- # return 0 00:05:10.677 00:39:03 -- json_config/common.sh@26 -- # echo '' 00:05:10.677 00:05:10.677 00:39:03 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:10.677 INFO: shutting down applications... 
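Shutdown, traced next, is a SIGINT followed by a bounded liveness poll. A sketch with the same 30-iteration, 0.5 s budget the trace shows (app_pid stands in for the test's app_pid["$app"] array entry):

# SIGINT the target, then allow up to ~15 s for it to exit.
kill -SIGINT "$app_pid"
for ((i = 0; i < 30; i++)); do
    if ! kill -0 "$app_pid" 2> /dev/null; then
        echo 'SPDK target shutdown done'   # kill -0 fails once the pid is gone
        break
    fi
    sleep 0.5
done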
00:05:10.677 00:39:03 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:10.677 00:39:03 -- json_config/common.sh@31 -- # local app=target 00:05:10.677 00:39:03 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:10.677 00:39:03 -- json_config/common.sh@35 -- # [[ -n 2553613 ]] 00:05:10.677 00:39:03 -- json_config/common.sh@38 -- # kill -SIGINT 2553613 00:05:10.677 00:39:03 -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:10.677 00:39:03 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.677 00:39:03 -- json_config/common.sh@41 -- # kill -0 2553613 00:05:10.677 00:39:03 -- json_config/common.sh@45 -- # sleep 0.5 00:05:10.937 00:39:03 -- json_config/common.sh@40 -- # (( i++ )) 00:05:10.937 00:39:03 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:10.937 00:39:03 -- json_config/common.sh@41 -- # kill -0 2553613 00:05:10.937 00:39:03 -- json_config/common.sh@45 -- # sleep 0.5 00:05:11.506 00:39:04 -- json_config/common.sh@40 -- # (( i++ )) 00:05:11.506 00:39:04 -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:11.506 00:39:04 -- json_config/common.sh@41 -- # kill -0 2553613 00:05:11.506 00:39:04 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:11.506 00:39:04 -- json_config/common.sh@43 -- # break 00:05:11.506 00:39:04 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:11.506 00:39:04 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:11.506 SPDK target shutdown done 00:05:11.506 00:39:04 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:11.506 Success 00:05:11.506 00:05:11.506 real 0m1.874s 00:05:11.506 user 0m1.574s 00:05:11.506 sys 0m0.485s 00:05:11.506 00:39:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:11.506 00:39:04 -- common/autotest_common.sh@10 -- # set +x 00:05:11.506 ************************************ 00:05:11.506 END TEST json_config_extra_key 00:05:11.506 ************************************ 00:05:11.506 00:39:04 -- spdk/autotest.sh@170 -- # run_test alias_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.506 00:39:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:11.506 00:39:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:11.506 00:39:04 -- common/autotest_common.sh@10 -- # set +x 00:05:11.766 ************************************ 00:05:11.766 START TEST alias_rpc 00:05:11.766 ************************************ 00:05:11.766 00:39:04 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:11.766 * Looking for test storage... 00:05:11.766 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/alias_rpc 00:05:11.766 00:39:04 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:11.766 00:39:04 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2553996 00:05:11.766 00:39:04 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2553996 00:05:11.766 00:39:04 -- common/autotest_common.sh@817 -- # '[' -z 2553996 ']' 00:05:11.766 00:39:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:11.766 00:39:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:11.766 00:39:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:11.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
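alias_rpc arms an ERR trap with the killprocess helper exercised throughout this run. Its shape, inferred from the xtrace fragments in this log; the sudo branch is never taken here, so its handling below is an assumption:

killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1
    kill -0 "$pid" || return 1                       # must still be running
    local process_name
    [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
    [ "$process_name" = sudo ] && return 1           # assumed: refuse sudo wrappers
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                      # reap so the exit code surfaces
}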
00:05:11.766 00:39:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:11.766 00:39:04 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:11.766 00:39:04 -- common/autotest_common.sh@10 -- # set +x 00:05:11.766 [2024-04-27 00:39:04.361159] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:11.766 [2024-04-27 00:39:04.361245] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2553996 ] 00:05:11.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:11.766 [2024-04-27 00:39:04.447112] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.025 [2024-04-27 00:39:04.539272] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:12.594 00:39:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:12.594 00:39:05 -- common/autotest_common.sh@850 -- # return 0 00:05:12.594 00:39:05 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:12.594 00:39:05 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2553996 00:05:12.594 00:39:05 -- common/autotest_common.sh@936 -- # '[' -z 2553996 ']' 00:05:12.594 00:39:05 -- common/autotest_common.sh@940 -- # kill -0 2553996 00:05:12.594 00:39:05 -- common/autotest_common.sh@941 -- # uname 00:05:12.594 00:39:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:12.594 00:39:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2553996 00:05:12.853 00:39:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:12.853 00:39:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:12.853 00:39:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2553996' 00:05:12.853 killing process with pid 2553996 00:05:12.853 00:39:05 -- common/autotest_common.sh@955 -- # kill 2553996 00:05:12.853 00:39:05 -- common/autotest_common.sh@960 -- # wait 2553996 00:05:13.817 00:05:13.817 real 0m1.950s 00:05:13.817 user 0m2.005s 00:05:13.817 sys 0m0.419s 00:05:13.818 00:39:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:13.818 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:13.818 ************************************ 00:05:13.818 END TEST alias_rpc 00:05:13.818 ************************************ 00:05:13.818 00:39:06 -- spdk/autotest.sh@172 -- # [[ 0 -eq 0 ]] 00:05:13.818 00:39:06 -- spdk/autotest.sh@173 -- # run_test spdkcli_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.818 00:39:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:13.818 00:39:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:13.818 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:13.818 ************************************ 00:05:13.818 START TEST spdkcli_tcp 00:05:13.818 ************************************ 00:05:13.818 00:39:06 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:13.818 * Looking for test storage... 
00:05:13.818 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:05:13.818 00:39:06 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:05:13.818 00:39:06 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:13.818 00:39:06 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:05:13.818 00:39:06 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:13.818 00:39:06 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:13.818 00:39:06 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:13.818 00:39:06 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:13.818 00:39:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:05:13.818 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:13.818 00:39:06 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2554774 00:05:13.818 00:39:06 -- spdkcli/tcp.sh@27 -- # waitforlisten 2554774 00:05:13.818 00:39:06 -- common/autotest_common.sh@817 -- # '[' -z 2554774 ']' 00:05:13.818 00:39:06 -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:13.818 00:39:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.818 00:39:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:13.818 00:39:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.818 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.818 00:39:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:13.818 00:39:06 -- common/autotest_common.sh@10 -- # set +x 00:05:13.818 [2024-04-27 00:39:06.433651] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:13.818 [2024-04-27 00:39:06.433756] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2554774 ] 00:05:13.818 EAL: No free 2048 kB hugepages reported on node 1 00:05:14.077 [2024-04-27 00:39:06.555970] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:14.077 [2024-04-27 00:39:06.659162] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.077 [2024-04-27 00:39:06.659173] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:14.647 00:39:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:14.647 00:39:07 -- common/autotest_common.sh@850 -- # return 0 00:05:14.647 00:39:07 -- spdkcli/tcp.sh@31 -- # socat_pid=2555078 00:05:14.647 00:39:07 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:14.647 00:39:07 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:14.647 [ 00:05:14.647 "bdev_malloc_delete", 00:05:14.647 "bdev_malloc_create", 00:05:14.647 "bdev_null_resize", 00:05:14.647 "bdev_null_delete", 00:05:14.647 "bdev_null_create", 00:05:14.647 "bdev_nvme_cuse_unregister", 00:05:14.647 "bdev_nvme_cuse_register", 00:05:14.647 "bdev_opal_new_user", 00:05:14.647 "bdev_opal_set_lock_state", 00:05:14.647 "bdev_opal_delete", 00:05:14.647 "bdev_opal_get_info", 00:05:14.647 "bdev_opal_create", 00:05:14.647 "bdev_nvme_opal_revert", 00:05:14.647 "bdev_nvme_opal_init", 00:05:14.647 "bdev_nvme_send_cmd", 00:05:14.647 "bdev_nvme_get_path_iostat", 00:05:14.647 "bdev_nvme_get_mdns_discovery_info", 00:05:14.647 "bdev_nvme_stop_mdns_discovery", 00:05:14.647 "bdev_nvme_start_mdns_discovery", 00:05:14.647 "bdev_nvme_set_multipath_policy", 00:05:14.647 "bdev_nvme_set_preferred_path", 00:05:14.647 "bdev_nvme_get_io_paths", 00:05:14.647 "bdev_nvme_remove_error_injection", 00:05:14.647 "bdev_nvme_add_error_injection", 00:05:14.647 "bdev_nvme_get_discovery_info", 00:05:14.647 "bdev_nvme_stop_discovery", 00:05:14.647 "bdev_nvme_start_discovery", 00:05:14.647 "bdev_nvme_get_controller_health_info", 00:05:14.647 "bdev_nvme_disable_controller", 00:05:14.647 "bdev_nvme_enable_controller", 00:05:14.647 "bdev_nvme_reset_controller", 00:05:14.647 "bdev_nvme_get_transport_statistics", 00:05:14.647 "bdev_nvme_apply_firmware", 00:05:14.647 "bdev_nvme_detach_controller", 00:05:14.647 "bdev_nvme_get_controllers", 00:05:14.647 "bdev_nvme_attach_controller", 00:05:14.647 "bdev_nvme_set_hotplug", 00:05:14.647 "bdev_nvme_set_options", 00:05:14.647 "bdev_passthru_delete", 00:05:14.647 "bdev_passthru_create", 00:05:14.647 "bdev_lvol_grow_lvstore", 00:05:14.647 "bdev_lvol_get_lvols", 00:05:14.647 "bdev_lvol_get_lvstores", 00:05:14.647 "bdev_lvol_delete", 00:05:14.647 "bdev_lvol_set_read_only", 00:05:14.647 "bdev_lvol_resize", 00:05:14.647 "bdev_lvol_decouple_parent", 00:05:14.647 "bdev_lvol_inflate", 00:05:14.647 "bdev_lvol_rename", 00:05:14.647 "bdev_lvol_clone_bdev", 00:05:14.647 "bdev_lvol_clone", 00:05:14.647 "bdev_lvol_snapshot", 00:05:14.647 "bdev_lvol_create", 00:05:14.647 "bdev_lvol_delete_lvstore", 00:05:14.647 "bdev_lvol_rename_lvstore", 00:05:14.647 "bdev_lvol_create_lvstore", 00:05:14.647 "bdev_raid_set_options", 00:05:14.647 "bdev_raid_remove_base_bdev", 00:05:14.647 "bdev_raid_add_base_bdev", 00:05:14.647 "bdev_raid_delete", 00:05:14.647 "bdev_raid_create", 
00:05:14.647 "bdev_raid_get_bdevs", 00:05:14.647 "bdev_error_inject_error", 00:05:14.647 "bdev_error_delete", 00:05:14.647 "bdev_error_create", 00:05:14.647 "bdev_split_delete", 00:05:14.647 "bdev_split_create", 00:05:14.647 "bdev_delay_delete", 00:05:14.647 "bdev_delay_create", 00:05:14.647 "bdev_delay_update_latency", 00:05:14.647 "bdev_zone_block_delete", 00:05:14.647 "bdev_zone_block_create", 00:05:14.647 "blobfs_create", 00:05:14.647 "blobfs_detect", 00:05:14.647 "blobfs_set_cache_size", 00:05:14.647 "bdev_aio_delete", 00:05:14.647 "bdev_aio_rescan", 00:05:14.647 "bdev_aio_create", 00:05:14.647 "bdev_ftl_set_property", 00:05:14.647 "bdev_ftl_get_properties", 00:05:14.647 "bdev_ftl_get_stats", 00:05:14.647 "bdev_ftl_unmap", 00:05:14.647 "bdev_ftl_unload", 00:05:14.647 "bdev_ftl_delete", 00:05:14.647 "bdev_ftl_load", 00:05:14.647 "bdev_ftl_create", 00:05:14.647 "bdev_virtio_attach_controller", 00:05:14.647 "bdev_virtio_scsi_get_devices", 00:05:14.647 "bdev_virtio_detach_controller", 00:05:14.647 "bdev_virtio_blk_set_hotplug", 00:05:14.647 "bdev_iscsi_delete", 00:05:14.647 "bdev_iscsi_create", 00:05:14.647 "bdev_iscsi_set_options", 00:05:14.647 "accel_error_inject_error", 00:05:14.647 "ioat_scan_accel_module", 00:05:14.647 "dsa_scan_accel_module", 00:05:14.647 "iaa_scan_accel_module", 00:05:14.647 "keyring_file_remove_key", 00:05:14.647 "keyring_file_add_key", 00:05:14.647 "iscsi_get_histogram", 00:05:14.647 "iscsi_enable_histogram", 00:05:14.647 "iscsi_set_options", 00:05:14.647 "iscsi_get_auth_groups", 00:05:14.647 "iscsi_auth_group_remove_secret", 00:05:14.647 "iscsi_auth_group_add_secret", 00:05:14.647 "iscsi_delete_auth_group", 00:05:14.647 "iscsi_create_auth_group", 00:05:14.647 "iscsi_set_discovery_auth", 00:05:14.647 "iscsi_get_options", 00:05:14.647 "iscsi_target_node_request_logout", 00:05:14.647 "iscsi_target_node_set_redirect", 00:05:14.647 "iscsi_target_node_set_auth", 00:05:14.647 "iscsi_target_node_add_lun", 00:05:14.647 "iscsi_get_stats", 00:05:14.647 "iscsi_get_connections", 00:05:14.647 "iscsi_portal_group_set_auth", 00:05:14.647 "iscsi_start_portal_group", 00:05:14.647 "iscsi_delete_portal_group", 00:05:14.647 "iscsi_create_portal_group", 00:05:14.647 "iscsi_get_portal_groups", 00:05:14.647 "iscsi_delete_target_node", 00:05:14.647 "iscsi_target_node_remove_pg_ig_maps", 00:05:14.647 "iscsi_target_node_add_pg_ig_maps", 00:05:14.647 "iscsi_create_target_node", 00:05:14.647 "iscsi_get_target_nodes", 00:05:14.647 "iscsi_delete_initiator_group", 00:05:14.647 "iscsi_initiator_group_remove_initiators", 00:05:14.647 "iscsi_initiator_group_add_initiators", 00:05:14.647 "iscsi_create_initiator_group", 00:05:14.647 "iscsi_get_initiator_groups", 00:05:14.647 "nvmf_set_crdt", 00:05:14.647 "nvmf_set_config", 00:05:14.648 "nvmf_set_max_subsystems", 00:05:14.648 "nvmf_subsystem_get_listeners", 00:05:14.648 "nvmf_subsystem_get_qpairs", 00:05:14.648 "nvmf_subsystem_get_controllers", 00:05:14.648 "nvmf_get_stats", 00:05:14.648 "nvmf_get_transports", 00:05:14.648 "nvmf_create_transport", 00:05:14.648 "nvmf_get_targets", 00:05:14.648 "nvmf_delete_target", 00:05:14.648 "nvmf_create_target", 00:05:14.648 "nvmf_subsystem_allow_any_host", 00:05:14.648 "nvmf_subsystem_remove_host", 00:05:14.648 "nvmf_subsystem_add_host", 00:05:14.648 "nvmf_ns_remove_host", 00:05:14.648 "nvmf_ns_add_host", 00:05:14.648 "nvmf_subsystem_remove_ns", 00:05:14.648 "nvmf_subsystem_add_ns", 00:05:14.648 "nvmf_subsystem_listener_set_ana_state", 00:05:14.648 "nvmf_discovery_get_referrals", 00:05:14.648 
"nvmf_discovery_remove_referral", 00:05:14.648 "nvmf_discovery_add_referral", 00:05:14.648 "nvmf_subsystem_remove_listener", 00:05:14.648 "nvmf_subsystem_add_listener", 00:05:14.648 "nvmf_delete_subsystem", 00:05:14.648 "nvmf_create_subsystem", 00:05:14.648 "nvmf_get_subsystems", 00:05:14.648 "env_dpdk_get_mem_stats", 00:05:14.648 "nbd_get_disks", 00:05:14.648 "nbd_stop_disk", 00:05:14.648 "nbd_start_disk", 00:05:14.648 "ublk_recover_disk", 00:05:14.648 "ublk_get_disks", 00:05:14.648 "ublk_stop_disk", 00:05:14.648 "ublk_start_disk", 00:05:14.648 "ublk_destroy_target", 00:05:14.648 "ublk_create_target", 00:05:14.648 "virtio_blk_create_transport", 00:05:14.648 "virtio_blk_get_transports", 00:05:14.648 "vhost_controller_set_coalescing", 00:05:14.648 "vhost_get_controllers", 00:05:14.648 "vhost_delete_controller", 00:05:14.648 "vhost_create_blk_controller", 00:05:14.648 "vhost_scsi_controller_remove_target", 00:05:14.648 "vhost_scsi_controller_add_target", 00:05:14.648 "vhost_start_scsi_controller", 00:05:14.648 "vhost_create_scsi_controller", 00:05:14.648 "thread_set_cpumask", 00:05:14.648 "framework_get_scheduler", 00:05:14.648 "framework_set_scheduler", 00:05:14.648 "framework_get_reactors", 00:05:14.648 "thread_get_io_channels", 00:05:14.648 "thread_get_pollers", 00:05:14.648 "thread_get_stats", 00:05:14.648 "framework_monitor_context_switch", 00:05:14.648 "spdk_kill_instance", 00:05:14.648 "log_enable_timestamps", 00:05:14.648 "log_get_flags", 00:05:14.648 "log_clear_flag", 00:05:14.648 "log_set_flag", 00:05:14.648 "log_get_level", 00:05:14.648 "log_set_level", 00:05:14.648 "log_get_print_level", 00:05:14.648 "log_set_print_level", 00:05:14.648 "framework_enable_cpumask_locks", 00:05:14.648 "framework_disable_cpumask_locks", 00:05:14.648 "framework_wait_init", 00:05:14.648 "framework_start_init", 00:05:14.648 "scsi_get_devices", 00:05:14.648 "bdev_get_histogram", 00:05:14.648 "bdev_enable_histogram", 00:05:14.648 "bdev_set_qos_limit", 00:05:14.648 "bdev_set_qd_sampling_period", 00:05:14.648 "bdev_get_bdevs", 00:05:14.648 "bdev_reset_iostat", 00:05:14.648 "bdev_get_iostat", 00:05:14.648 "bdev_examine", 00:05:14.648 "bdev_wait_for_examine", 00:05:14.648 "bdev_set_options", 00:05:14.648 "notify_get_notifications", 00:05:14.648 "notify_get_types", 00:05:14.648 "accel_get_stats", 00:05:14.648 "accel_set_options", 00:05:14.648 "accel_set_driver", 00:05:14.648 "accel_crypto_key_destroy", 00:05:14.648 "accel_crypto_keys_get", 00:05:14.648 "accel_crypto_key_create", 00:05:14.648 "accel_assign_opc", 00:05:14.648 "accel_get_module_info", 00:05:14.648 "accel_get_opc_assignments", 00:05:14.648 "vmd_rescan", 00:05:14.648 "vmd_remove_device", 00:05:14.648 "vmd_enable", 00:05:14.648 "sock_get_default_impl", 00:05:14.648 "sock_set_default_impl", 00:05:14.648 "sock_impl_set_options", 00:05:14.648 "sock_impl_get_options", 00:05:14.648 "iobuf_get_stats", 00:05:14.648 "iobuf_set_options", 00:05:14.648 "framework_get_pci_devices", 00:05:14.648 "framework_get_config", 00:05:14.648 "framework_get_subsystems", 00:05:14.648 "trace_get_info", 00:05:14.648 "trace_get_tpoint_group_mask", 00:05:14.648 "trace_disable_tpoint_group", 00:05:14.648 "trace_enable_tpoint_group", 00:05:14.648 "trace_clear_tpoint_mask", 00:05:14.648 "trace_set_tpoint_mask", 00:05:14.648 "keyring_get_keys", 00:05:14.648 "spdk_get_version", 00:05:14.648 "rpc_get_methods" 00:05:14.648 ] 00:05:14.910 00:39:07 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:14.910 00:39:07 -- common/autotest_common.sh@716 -- # xtrace_disable 
00:05:14.910 00:39:07 -- common/autotest_common.sh@10 -- # set +x 00:05:14.910 00:39:07 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:14.910 00:39:07 -- spdkcli/tcp.sh@38 -- # killprocess 2554774 00:05:14.910 00:39:07 -- common/autotest_common.sh@936 -- # '[' -z 2554774 ']' 00:05:14.910 00:39:07 -- common/autotest_common.sh@940 -- # kill -0 2554774 00:05:14.910 00:39:07 -- common/autotest_common.sh@941 -- # uname 00:05:14.910 00:39:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:14.910 00:39:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2554774 00:05:14.910 00:39:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:14.910 00:39:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:14.910 00:39:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2554774' 00:05:14.910 killing process with pid 2554774 00:05:14.910 00:39:07 -- common/autotest_common.sh@955 -- # kill 2554774 00:05:14.910 00:39:07 -- common/autotest_common.sh@960 -- # wait 2554774 00:05:15.935 00:05:15.935 real 0m1.976s 00:05:15.935 user 0m3.464s 00:05:15.935 sys 0m0.483s 00:05:15.935 00:39:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:15.935 00:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:15.935 ************************************ 00:05:15.935 END TEST spdkcli_tcp 00:05:15.935 ************************************ 00:05:15.936 00:39:08 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.936 00:39:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:15.936 00:39:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:15.936 00:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:15.936 ************************************ 00:05:15.936 START TEST dpdk_mem_utility 00:05:15.936 ************************************ 00:05:15.936 00:39:08 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:15.936 * Looking for test storage... 00:05:15.936 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/dpdk_memory_utility 00:05:15.936 00:39:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:15.936 00:39:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2555449 00:05:15.936 00:39:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2555449 00:05:15.936 00:39:08 -- common/autotest_common.sh@817 -- # '[' -z 2555449 ']' 00:05:15.936 00:39:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.936 00:39:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:15.936 00:39:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.936 00:39:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:15.936 00:39:08 -- common/autotest_common.sh@10 -- # set +x 00:05:15.936 00:39:08 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:05:15.936 [2024-04-27 00:39:08.518293] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:15.936 [2024-04-27 00:39:08.518379] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555449 ] 00:05:15.936 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.198 [2024-04-27 00:39:08.608521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.198 [2024-04-27 00:39:08.698316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.771 00:39:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:16.771 00:39:09 -- common/autotest_common.sh@850 -- # return 0 00:05:16.771 00:39:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:16.771 00:39:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:16.771 00:39:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:16.771 00:39:09 -- common/autotest_common.sh@10 -- # set +x 00:05:16.771 { 00:05:16.771 "filename": "/tmp/spdk_mem_dump.txt" 00:05:16.771 } 00:05:16.771 00:39:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:16.771 00:39:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:16.771 DPDK memory size 820.000000 MiB in 1 heap(s) 00:05:16.771 1 heaps totaling size 820.000000 MiB 00:05:16.771 size: 820.000000 MiB heap id: 0 00:05:16.771 end heaps---------- 00:05:16.771 8 mempools totaling size 598.116089 MiB 00:05:16.771 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:16.771 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:16.771 size: 84.521057 MiB name: bdev_io_2555449 00:05:16.771 size: 51.011292 MiB name: evtpool_2555449 00:05:16.771 size: 50.003479 MiB name: msgpool_2555449 00:05:16.771 size: 21.763794 MiB name: PDU_Pool 00:05:16.771 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:16.771 size: 0.026123 MiB name: Session_Pool 00:05:16.771 end mempools------- 00:05:16.771 6 memzones totaling size 4.142822 MiB 00:05:16.771 size: 1.000366 MiB name: RG_ring_0_2555449 00:05:16.771 size: 1.000366 MiB name: RG_ring_1_2555449 00:05:16.771 size: 1.000366 MiB name: RG_ring_4_2555449 00:05:16.771 size: 1.000366 MiB name: RG_ring_5_2555449 00:05:16.771 size: 0.125366 MiB name: RG_ring_2_2555449 00:05:16.771 size: 0.015991 MiB name: RG_ring_3_2555449 00:05:16.771 end memzones------- 00:05:16.771 00:39:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:16.771 heap id: 0 total size: 820.000000 MiB number of busy elements: 41 number of free elements: 19 00:05:16.771 list of free elements. 
size: 18.514832 MiB 00:05:16.771 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:16.771 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:16.771 element at address: 0x200007000000 with size: 1.995972 MiB 00:05:16.771 element at address: 0x20000b200000 with size: 1.995972 MiB 00:05:16.771 element at address: 0x200019100040 with size: 0.999939 MiB 00:05:16.771 element at address: 0x200019500040 with size: 0.999939 MiB 00:05:16.771 element at address: 0x200019600000 with size: 0.999329 MiB 00:05:16.771 element at address: 0x200003e00000 with size: 0.996094 MiB 00:05:16.771 element at address: 0x200032200000 with size: 0.994324 MiB 00:05:16.771 element at address: 0x200018e00000 with size: 0.959900 MiB 00:05:16.771 element at address: 0x200019900040 with size: 0.937256 MiB 00:05:16.771 element at address: 0x200000200000 with size: 0.840942 MiB 00:05:16.771 element at address: 0x20001b000000 with size: 0.583191 MiB 00:05:16.771 element at address: 0x200019200000 with size: 0.491150 MiB 00:05:16.771 element at address: 0x200019a00000 with size: 0.485657 MiB 00:05:16.771 element at address: 0x200013800000 with size: 0.470581 MiB 00:05:16.771 element at address: 0x200028400000 with size: 0.411072 MiB 00:05:16.771 element at address: 0x200003a00000 with size: 0.356140 MiB 00:05:16.771 element at address: 0x20000b1ff040 with size: 0.001038 MiB 00:05:16.771 list of standard malloc elements. size: 199.220764 MiB 00:05:16.771 element at address: 0x20000b3fef80 with size: 132.000183 MiB 00:05:16.771 element at address: 0x2000071fef80 with size: 64.000183 MiB 00:05:16.771 element at address: 0x200018ffff80 with size: 1.000183 MiB 00:05:16.771 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:05:16.771 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:05:16.771 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:05:16.771 element at address: 0x2000199eff40 with size: 0.062683 MiB 00:05:16.771 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:05:16.771 element at address: 0x2000137ff040 with size: 0.000427 MiB 00:05:16.771 element at address: 0x2000137ffa00 with size: 0.000366 MiB 00:05:16.771 element at address: 0x2000002d7480 with size: 0.000244 MiB 00:05:16.771 element at address: 0x2000002d7580 with size: 0.000244 MiB 00:05:16.771 element at address: 0x2000002d7680 with size: 0.000244 MiB 00:05:16.771 element at address: 0x2000002d7900 with size: 0.000244 MiB 00:05:16.771 element at address: 0x2000002d7a00 with size: 0.000244 MiB 00:05:16.771 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:05:16.771 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:05:16.771 element at address: 0x200003aff980 with size: 0.000244 MiB 00:05:16.771 element at address: 0x200003affa80 with size: 0.000244 MiB 00:05:16.771 element at address: 0x200003eff000 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ff480 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ff580 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ff680 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ff780 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ff880 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ff980 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ffc00 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ffd00 with size: 0.000244 MiB 00:05:16.771 element at address: 0x20000b1ffe00 with size: 0.000244 MiB 
00:05:16.772 element at address: 0x20000b1fff00 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ff200 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ff300 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ff400 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ff500 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ff600 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ff700 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ff800 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ff900 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ffb80 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137ffc80 with size: 0.000244 MiB 00:05:16.772 element at address: 0x2000137fff00 with size: 0.000244 MiB 00:05:16.772 list of memzone associated elements. size: 602.264404 MiB 00:05:16.772 element at address: 0x20001b0954c0 with size: 211.416809 MiB 00:05:16.772 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:16.772 element at address: 0x20002846ff80 with size: 157.562622 MiB 00:05:16.772 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:16.772 element at address: 0x2000139fab40 with size: 84.020691 MiB 00:05:16.772 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2555449_0 00:05:16.772 element at address: 0x2000009ff340 with size: 48.003113 MiB 00:05:16.772 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2555449_0 00:05:16.772 element at address: 0x200003fff340 with size: 48.003113 MiB 00:05:16.772 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2555449_0 00:05:16.772 element at address: 0x200019bbe900 with size: 20.255615 MiB 00:05:16.772 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:16.772 element at address: 0x2000323feb00 with size: 18.005127 MiB 00:05:16.772 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:16.772 element at address: 0x2000005ffdc0 with size: 2.000549 MiB 00:05:16.772 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2555449 00:05:16.772 element at address: 0x200003bffdc0 with size: 2.000549 MiB 00:05:16.772 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2555449 00:05:16.772 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:05:16.772 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2555449 00:05:16.772 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:05:16.772 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:16.772 element at address: 0x200019abc780 with size: 1.008179 MiB 00:05:16.772 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:16.772 element at address: 0x200018efde00 with size: 1.008179 MiB 00:05:16.772 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:16.772 element at address: 0x2000138f89c0 with size: 1.008179 MiB 00:05:16.772 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:16.772 element at address: 0x200003eff100 with size: 1.000549 MiB 00:05:16.772 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2555449 00:05:16.772 element at address: 0x200003affb80 with size: 1.000549 MiB 00:05:16.772 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2555449 00:05:16.772 element at address: 0x2000196ffd40 with size: 1.000549 MiB 00:05:16.772 associated memzone info: size: 
1.000366 MiB name: RG_ring_4_2555449 00:05:16.772 element at address: 0x2000322fe8c0 with size: 1.000549 MiB 00:05:16.772 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2555449 00:05:16.772 element at address: 0x200003a5b2c0 with size: 0.500549 MiB 00:05:16.772 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2555449 00:05:16.772 element at address: 0x20001927dbc0 with size: 0.500549 MiB 00:05:16.772 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:16.772 element at address: 0x200013878780 with size: 0.500549 MiB 00:05:16.772 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:16.772 element at address: 0x200019a7c540 with size: 0.250549 MiB 00:05:16.772 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:16.772 element at address: 0x200003adf740 with size: 0.125549 MiB 00:05:16.772 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2555449 00:05:16.772 element at address: 0x200018ef5bc0 with size: 0.031799 MiB 00:05:16.772 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:16.772 element at address: 0x2000284693c0 with size: 0.023804 MiB 00:05:16.772 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:16.772 element at address: 0x200003adb500 with size: 0.016174 MiB 00:05:16.772 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2555449 00:05:16.772 element at address: 0x20002846f540 with size: 0.002502 MiB 00:05:16.772 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:16.772 element at address: 0x2000002d7780 with size: 0.000366 MiB 00:05:16.772 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2555449 00:05:16.772 element at address: 0x2000137ffd80 with size: 0.000366 MiB 00:05:16.772 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2555449 00:05:16.772 element at address: 0x20000b1ffa80 with size: 0.000366 MiB 00:05:16.772 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:16.772 00:39:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:16.772 00:39:09 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2555449 00:05:16.772 00:39:09 -- common/autotest_common.sh@936 -- # '[' -z 2555449 ']' 00:05:16.772 00:39:09 -- common/autotest_common.sh@940 -- # kill -0 2555449 00:05:16.772 00:39:09 -- common/autotest_common.sh@941 -- # uname 00:05:16.772 00:39:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:16.772 00:39:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2555449 00:05:16.772 00:39:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:16.772 00:39:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:16.772 00:39:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2555449' 00:05:16.772 killing process with pid 2555449 00:05:16.772 00:39:09 -- common/autotest_common.sh@955 -- # kill 2555449 00:05:16.772 00:39:09 -- common/autotest_common.sh@960 -- # wait 2555449 00:05:17.715 00:05:17.715 real 0m1.905s 00:05:17.715 user 0m1.914s 00:05:17.715 sys 0m0.428s 00:05:17.715 00:39:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:17.715 00:39:10 -- common/autotest_common.sh@10 -- # set +x 00:05:17.715 ************************************ 00:05:17.715 END TEST dpdk_mem_utility 00:05:17.715 ************************************ 00:05:17.715 00:39:10 -- spdk/autotest.sh@177 -- # run_test event 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:05:17.715 00:39:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:17.715 00:39:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.715 00:39:10 -- common/autotest_common.sh@10 -- # set +x 00:05:17.715 ************************************ 00:05:17.715 START TEST event 00:05:17.715 ************************************ 00:05:17.715 00:39:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event.sh 00:05:17.975 * Looking for test storage... 00:05:17.975 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:05:17.975 00:39:10 -- event/event.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:17.975 00:39:10 -- bdev/nbd_common.sh@6 -- # set -e 00:05:17.975 00:39:10 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.975 00:39:10 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:05:17.975 00:39:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:17.975 00:39:10 -- common/autotest_common.sh@10 -- # set +x 00:05:17.975 ************************************ 00:05:17.975 START TEST event_perf 00:05:17.975 ************************************ 00:05:17.975 00:39:10 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:17.975 Running I/O for 1 seconds...[2024-04-27 00:39:10.589145] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:17.975 [2024-04-27 00:39:10.589274] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2555836 ] 00:05:17.975 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.237 [2024-04-27 00:39:10.706476] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.237 [2024-04-27 00:39:10.798824] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.237 [2024-04-27 00:39:10.798908] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.237 [2024-04-27 00:39:10.799008] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:18.237 [2024-04-27 00:39:10.799016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:19.616 Running I/O for 1 seconds... 00:05:19.616 lcore 0: 134276 00:05:19.616 lcore 1: 134276 00:05:19.616 lcore 2: 134272 00:05:19.616 lcore 3: 134275 00:05:19.616 done. 
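The four per-lcore counters above follow from the -m 0xF core mask handed to event_perf (bits 0 through 3 set). Decoding such a mask is plain bit arithmetic; a throwaway sketch:

# Enumerate the lcores selected by an EAL core mask such as 0xF.
mask=0xF
for ((core = 0; core < 64; core++)); do
    (( (mask >> core) & 1 )) && echo "lcore $core enabled"   # 0xF -> 0,1,2,3
done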
00:05:19.616 00:05:19.616 real 0m1.390s 00:05:19.616 user 0m4.236s 00:05:19.616 sys 0m0.139s 00:05:19.616 00:39:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:19.616 00:39:11 -- common/autotest_common.sh@10 -- # set +x 00:05:19.616 ************************************ 00:05:19.616 END TEST event_perf 00:05:19.616 ************************************ 00:05:19.616 00:39:11 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.616 00:39:11 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:19.616 00:39:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:19.616 00:39:11 -- common/autotest_common.sh@10 -- # set +x 00:05:19.616 ************************************ 00:05:19.616 START TEST event_reactor 00:05:19.616 ************************************ 00:05:19.616 00:39:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.616 [2024-04-27 00:39:12.104248] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:19.616 [2024-04-27 00:39:12.104350] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2556161 ] 00:05:19.616 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.616 [2024-04-27 00:39:12.219530] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.616 [2024-04-27 00:39:12.309933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.999 test_start 00:05:20.999 oneshot 00:05:20.999 tick 100 00:05:20.999 tick 100 00:05:20.999 tick 250 00:05:20.999 tick 100 00:05:20.999 tick 100 00:05:20.999 tick 100 00:05:20.999 tick 250 00:05:20.999 tick 500 00:05:20.999 tick 100 00:05:20.999 tick 100 00:05:20.999 tick 250 00:05:20.999 tick 100 00:05:20.999 tick 100 00:05:20.999 test_end 00:05:20.999 00:05:20.999 real 0m1.386s 00:05:20.999 user 0m1.255s 00:05:20.999 sys 0m0.125s 00:05:20.999 00:39:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:20.999 00:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:20.999 ************************************ 00:05:20.999 END TEST event_reactor 00:05:20.999 ************************************ 00:05:20.999 00:39:13 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.999 00:39:13 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:05:20.999 00:39:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:20.999 00:39:13 -- common/autotest_common.sh@10 -- # set +x 00:05:20.999 ************************************ 00:05:20.999 START TEST event_reactor_perf 00:05:20.999 ************************************ 00:05:20.999 00:39:13 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.999 [2024-04-27 00:39:13.613175] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:20.999 [2024-04-27 00:39:13.613285] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2556490 ] 00:05:20.999 EAL: No free 2048 kB hugepages reported on node 1 00:05:21.257 [2024-04-27 00:39:13.729722] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.257 [2024-04-27 00:39:13.819483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.639 test_start 00:05:22.639 test_end 00:05:22.639 Performance: 416488 events per second 00:05:22.639 00:05:22.639 real 0m1.385s 00:05:22.639 user 0m1.259s 00:05:22.639 sys 0m0.120s 00:05:22.639 00:39:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:22.639 00:39:14 -- common/autotest_common.sh@10 -- # set +x 00:05:22.639 ************************************ 00:05:22.639 END TEST event_reactor_perf 00:05:22.639 ************************************ 00:05:22.639 00:39:14 -- event/event.sh@49 -- # uname -s 00:05:22.639 00:39:15 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:22.639 00:39:15 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.639 00:39:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.639 00:39:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.639 00:39:15 -- common/autotest_common.sh@10 -- # set +x 00:05:22.639 ************************************ 00:05:22.639 START TEST event_scheduler 00:05:22.639 ************************************ 00:05:22.639 00:39:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.639 * Looking for test storage... 00:05:22.639 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler 00:05:22.639 00:39:15 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.639 00:39:15 -- scheduler/scheduler.sh@35 -- # scheduler_pid=2556841 00:05:22.639 00:39:15 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.639 00:39:15 -- scheduler/scheduler.sh@37 -- # waitforlisten 2556841 00:05:22.639 00:39:15 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.639 00:39:15 -- common/autotest_common.sh@817 -- # '[' -z 2556841 ']' 00:05:22.639 00:39:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.640 00:39:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:22.640 00:39:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.640 00:39:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:22.640 00:39:15 -- common/autotest_common.sh@10 -- # set +x 00:05:22.640 [2024-04-27 00:39:15.206866] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:22.640 [2024-04-27 00:39:15.206933] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2556841 ] 00:05:22.640 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.640 [2024-04-27 00:39:15.292211] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.899 [2024-04-27 00:39:15.390948] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.899 [2024-04-27 00:39:15.391120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.899 [2024-04-27 00:39:15.391204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.899 [2024-04-27 00:39:15.391193] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.468 00:39:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:23.468 00:39:16 -- common/autotest_common.sh@850 -- # return 0 00:05:23.468 00:39:16 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.468 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.468 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.468 POWER: Env isn't set yet! 00:05:23.468 POWER: Attempting to initialise ACPI cpufreq power management... 00:05:23.468 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:23.468 POWER: Cannot set governor of lcore 0 to userspace 00:05:23.468 POWER: Attempting to initialise PSTAT power management... 00:05:23.468 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:05:23.468 POWER: Initialized successfully for lcore 0 power management 00:05:23.468 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:05:23.468 POWER: Initialized successfully for lcore 1 power management 00:05:23.468 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:05:23.469 POWER: Initialized successfully for lcore 2 power management 00:05:23.469 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:05:23.469 POWER: Initialized successfully for lcore 3 power management 00:05:23.469 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.469 00:39:16 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.469 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.469 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.728 [2024-04-27 00:39:16.232110] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
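scheduler_create_thread, traced below, registers paired threads through the scheduler RPC plugin: an active thread at 100% load and an idle one at 0 for each core in the 0xF mask. Condensed into a loop (the trace issues all active_pinned calls first, then the idle_pinned ones; rpc_cmd is the harness wrapper seen in the trace):

# One busy and one idle thread pinned to each of the four cores.
for cpumask in 0x1 0x2 0x4 0x8; do
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
        -n active_pinned -m "$cpumask" -a 100   # ~100% active load
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create \
        -n idle_pinned -m "$cpumask" -a 0       # fully idle
done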
00:05:23.728 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.728 00:39:16 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.728 00:39:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:23.728 00:39:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:23.728 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.728 ************************************ 00:05:23.728 START TEST scheduler_create_thread 00:05:23.728 ************************************ 00:05:23.728 00:39:16 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:05:23.728 00:39:16 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.728 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.728 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.728 2 00:05:23.728 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.728 00:39:16 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.728 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.728 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.728 3 00:05:23.728 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.728 00:39:16 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.728 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.728 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.728 4 00:05:23.728 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.728 00:39:16 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.728 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.728 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.728 5 00:05:23.728 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.728 00:39:16 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.728 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.728 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.728 6 00:05:23.728 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.729 00:39:16 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.729 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.729 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.729 7 00:05:23.729 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.729 00:39:16 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.729 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.729 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.729 8 00:05:23.729 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.729 00:39:16 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.729 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.729 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.729 9 00:05:23.729 
00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.729 00:39:16 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.729 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.729 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.729 10 00:05:23.729 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.729 00:39:16 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.729 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.729 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.729 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.729 00:39:16 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:23.729 00:39:16 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:23.729 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.729 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.729 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.729 00:39:16 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:23.729 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.729 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:23.989 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:23.989 00:39:16 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:23.989 00:39:16 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:23.989 00:39:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:23.989 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.249 00:39:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:24.249 00:05:24.249 real 0m0.592s 00:05:24.249 user 0m0.011s 00:05:24.249 sys 0m0.005s 00:05:24.249 00:39:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:24.249 00:39:16 -- common/autotest_common.sh@10 -- # set +x 00:05:24.249 ************************************ 00:05:24.249 END TEST scheduler_create_thread 00:05:24.249 ************************************ 00:05:24.508 00:39:16 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:24.508 00:39:16 -- scheduler/scheduler.sh@46 -- # killprocess 2556841 00:05:24.508 00:39:16 -- common/autotest_common.sh@936 -- # '[' -z 2556841 ']' 00:05:24.508 00:39:16 -- common/autotest_common.sh@940 -- # kill -0 2556841 00:05:24.508 00:39:16 -- common/autotest_common.sh@941 -- # uname 00:05:24.508 00:39:16 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:24.508 00:39:16 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2556841 00:05:24.508 00:39:17 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:05:24.508 00:39:17 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:05:24.508 00:39:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2556841' 00:05:24.508 killing process with pid 2556841 00:05:24.508 00:39:17 -- common/autotest_common.sh@955 -- # kill 2556841 00:05:24.508 00:39:17 -- common/autotest_common.sh@960 -- # wait 2556841 00:05:24.766 [2024-04-27 00:39:17.396386] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
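The kill/wait sequence traced above (kill -0, a process-name check via ps, then kill and wait on pid 2556841) condenses to the sketch below; the real helper in test/common/autotest_common.sh also guards against sudo-owned targets, which this sketch omits:

    # Condensed sketch of the shutdown pattern in the trace: verify the
    # pid is alive, send SIGTERM, then reap it. wait works here because
    # the target app was started as a child of the test shell.
    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0
        kill "$pid"
        wait "$pid"
    }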
00:05:25.025 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:05:25.025 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:05:25.025 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:05:25.025 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:05:25.026 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:05:25.026 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:05:25.026 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:05:25.026 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:05:25.286 00:05:25.286 real 0m2.755s 00:05:25.286 user 0m5.487s 00:05:25.286 sys 0m0.443s 00:05:25.286 00:39:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:25.286 00:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:25.286 ************************************ 00:05:25.286 END TEST event_scheduler 00:05:25.286 ************************************ 00:05:25.286 00:39:17 -- event/event.sh@51 -- # modprobe -n nbd 00:05:25.286 00:39:17 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:25.286 00:39:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:25.286 00:39:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:25.286 00:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:25.286 ************************************ 00:05:25.286 START TEST app_repeat 00:05:25.286 ************************************ 00:05:25.286 00:39:17 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:05:25.286 00:39:17 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:25.286 00:39:17 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:25.286 00:39:17 -- event/event.sh@13 -- # local nbd_list 00:05:25.286 00:39:17 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:25.286 00:39:17 -- event/event.sh@14 -- # local bdev_list 00:05:25.286 00:39:17 -- event/event.sh@15 -- # local repeat_times=4 00:05:25.286 00:39:17 -- event/event.sh@17 -- # modprobe nbd 00:05:25.546 00:39:17 -- event/event.sh@19 -- # repeat_pid=2557506 00:05:25.546 00:39:17 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.546 00:39:17 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2557506' 00:05:25.546 Process app_repeat pid: 2557506 00:05:25.546 00:39:17 -- event/event.sh@23 -- # for i in {0..2} 00:05:25.546 00:39:17 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:25.546 spdk_app_start Round 0 00:05:25.546 00:39:17 -- event/event.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:25.546 00:39:17 -- event/event.sh@25 -- # waitforlisten 2557506 /var/tmp/spdk-nbd.sock 00:05:25.546 00:39:17 -- common/autotest_common.sh@817 -- # '[' -z 2557506 ']' 00:05:25.546 00:39:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:25.546 00:39:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:25.546 00:39:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:05:25.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:25.546 00:39:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:25.546 00:39:17 -- common/autotest_common.sh@10 -- # set +x 00:05:25.546 [2024-04-27 00:39:18.024176] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:25.546 [2024-04-27 00:39:18.024292] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2557506 ] 00:05:25.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.546 [2024-04-27 00:39:18.145995] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:25.546 [2024-04-27 00:39:18.241168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.546 [2024-04-27 00:39:18.241194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.114 00:39:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:26.114 00:39:18 -- common/autotest_common.sh@850 -- # return 0 00:05:26.114 00:39:18 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.375 Malloc0 00:05:26.375 00:39:18 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:26.635 Malloc1 00:05:26.635 00:39:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@12 -- # local i 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:26.635 /dev/nbd0 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:26.635 00:39:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:26.635 00:39:19 -- common/autotest_common.sh@855 -- # local i 00:05:26.635 00:39:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:26.635 00:39:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:26.635 00:39:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:26.635 00:39:19 -- common/autotest_common.sh@859 -- # 
break 00:05:26.635 00:39:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:26.635 00:39:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:26.635 00:39:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.635 1+0 records in 00:05:26.635 1+0 records out 00:05:26.635 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000279609 s, 14.6 MB/s 00:05:26.635 00:39:19 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:26.635 00:39:19 -- common/autotest_common.sh@872 -- # size=4096 00:05:26.635 00:39:19 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:26.635 00:39:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:26.635 00:39:19 -- common/autotest_common.sh@875 -- # return 0 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.635 00:39:19 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:26.896 /dev/nbd1 00:05:26.896 00:39:19 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:26.896 00:39:19 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:26.896 00:39:19 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:26.896 00:39:19 -- common/autotest_common.sh@855 -- # local i 00:05:26.896 00:39:19 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:26.896 00:39:19 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:26.896 00:39:19 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:26.896 00:39:19 -- common/autotest_common.sh@859 -- # break 00:05:26.896 00:39:19 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:26.896 00:39:19 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:26.896 00:39:19 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:26.896 1+0 records in 00:05:26.896 1+0 records out 00:05:26.896 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244021 s, 16.8 MB/s 00:05:26.896 00:39:19 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:26.896 00:39:19 -- common/autotest_common.sh@872 -- # size=4096 00:05:26.896 00:39:19 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:26.896 00:39:19 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:26.896 00:39:19 -- common/autotest_common.sh@875 -- # return 0 00:05:26.896 00:39:19 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:26.896 00:39:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:26.896 00:39:19 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:26.896 00:39:19 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.896 00:39:19 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:27.157 { 00:05:27.157 "nbd_device": "/dev/nbd0", 00:05:27.157 "bdev_name": "Malloc0" 00:05:27.157 }, 00:05:27.157 { 00:05:27.157 "nbd_device": "/dev/nbd1", 00:05:27.157 "bdev_name": "Malloc1" 00:05:27.157 } 00:05:27.157 ]' 
00:05:27.157 00:39:19 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:27.157 { 00:05:27.157 "nbd_device": "/dev/nbd0", 00:05:27.157 "bdev_name": "Malloc0" 00:05:27.157 }, 00:05:27.157 { 00:05:27.157 "nbd_device": "/dev/nbd1", 00:05:27.157 "bdev_name": "Malloc1" 00:05:27.157 } 00:05:27.157 ]' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:27.157 /dev/nbd1' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:27.157 /dev/nbd1' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@65 -- # count=2 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@95 -- # count=2 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:27.157 256+0 records in 00:05:27.157 256+0 records out 00:05:27.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00474362 s, 221 MB/s 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:27.157 256+0 records in 00:05:27.157 256+0 records out 00:05:27.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150577 s, 69.6 MB/s 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:27.157 256+0 records in 00:05:27.157 256+0 records out 00:05:27.157 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0166671 s, 62.9 MB/s 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:27.157 
00:39:19 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:27.157 00:39:19 -- bdev/nbd_common.sh@51 -- # local i 00:05:27.158 00:39:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.158 00:39:19 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@41 -- # break 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:27.416 00:39:19 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@41 -- # break 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@45 -- # return 0 00:05:27.416 00:39:20 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:27.417 00:39:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.417 00:39:20 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@65 -- # true 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@65 -- # count=0 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@104 -- # count=0 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:27.677 00:39:20 -- bdev/nbd_common.sh@109 -- # return 0 00:05:27.677 00:39:20 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:27.937 00:39:20 -- event/event.sh@35 -- # sleep 3 00:05:28.507 [2024-04-27 00:39:20.930630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:28.507 
[2024-04-27 00:39:21.016595] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.507 [2024-04-27 00:39:21.016597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:28.507 [2024-04-27 00:39:21.096021] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:28.507 [2024-04-27 00:39:21.096069] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:31.058 00:39:23 -- event/event.sh@23 -- # for i in {0..2} 00:05:31.058 00:39:23 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:31.058 spdk_app_start Round 1 00:05:31.058 00:39:23 -- event/event.sh@25 -- # waitforlisten 2557506 /var/tmp/spdk-nbd.sock 00:05:31.058 00:39:23 -- common/autotest_common.sh@817 -- # '[' -z 2557506 ']' 00:05:31.058 00:39:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:31.058 00:39:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:31.058 00:39:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:31.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:31.058 00:39:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:31.058 00:39:23 -- common/autotest_common.sh@10 -- # set +x 00:05:31.058 00:39:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:31.058 00:39:23 -- common/autotest_common.sh@850 -- # return 0 00:05:31.058 00:39:23 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.318 Malloc0 00:05:31.318 00:39:23 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:31.318 Malloc1 00:05:31.318 00:39:23 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@12 -- # local i 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.318 00:39:23 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:31.578 /dev/nbd0 00:05:31.578 00:39:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:31.578 00:39:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:31.578 00:39:24 -- 
common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:31.578 00:39:24 -- common/autotest_common.sh@855 -- # local i 00:05:31.578 00:39:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:31.578 00:39:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:31.578 00:39:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:31.578 00:39:24 -- common/autotest_common.sh@859 -- # break 00:05:31.578 00:39:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:31.578 00:39:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:31.578 00:39:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.578 1+0 records in 00:05:31.578 1+0 records out 00:05:31.578 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198931 s, 20.6 MB/s 00:05:31.578 00:39:24 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:31.578 00:39:24 -- common/autotest_common.sh@872 -- # size=4096 00:05:31.578 00:39:24 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:31.578 00:39:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:31.578 00:39:24 -- common/autotest_common.sh@875 -- # return 0 00:05:31.578 00:39:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.578 00:39:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.578 00:39:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:31.578 /dev/nbd1 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:31.839 00:39:24 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:31.839 00:39:24 -- common/autotest_common.sh@855 -- # local i 00:05:31.839 00:39:24 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:31.839 00:39:24 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:31.839 00:39:24 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:31.839 00:39:24 -- common/autotest_common.sh@859 -- # break 00:05:31.839 00:39:24 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:31.839 00:39:24 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:31.839 00:39:24 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:31.839 1+0 records in 00:05:31.839 1+0 records out 00:05:31.839 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000280872 s, 14.6 MB/s 00:05:31.839 00:39:24 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:31.839 00:39:24 -- common/autotest_common.sh@872 -- # size=4096 00:05:31.839 00:39:24 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:31.839 00:39:24 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:31.839 00:39:24 -- common/autotest_common.sh@875 -- # return 0 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:31.839 00:39:24 -- 
bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:31.839 { 00:05:31.839 "nbd_device": "/dev/nbd0", 00:05:31.839 "bdev_name": "Malloc0" 00:05:31.839 }, 00:05:31.839 { 00:05:31.839 "nbd_device": "/dev/nbd1", 00:05:31.839 "bdev_name": "Malloc1" 00:05:31.839 } 00:05:31.839 ]' 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:31.839 { 00:05:31.839 "nbd_device": "/dev/nbd0", 00:05:31.839 "bdev_name": "Malloc0" 00:05:31.839 }, 00:05:31.839 { 00:05:31.839 "nbd_device": "/dev/nbd1", 00:05:31.839 "bdev_name": "Malloc1" 00:05:31.839 } 00:05:31.839 ]' 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:31.839 /dev/nbd1' 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:31.839 /dev/nbd1' 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@65 -- # count=2 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@95 -- # count=2 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:31.839 256+0 records in 00:05:31.839 256+0 records out 00:05:31.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00449255 s, 233 MB/s 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:31.839 00:39:24 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:32.097 256+0 records in 00:05:32.097 256+0 records out 00:05:32.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163936 s, 64.0 MB/s 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:32.097 256+0 records in 00:05:32.097 256+0 records out 00:05:32.097 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162642 s, 64.5 MB/s 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:32.097 00:39:24 -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@51 -- # local i 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@41 -- # break 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:32.097 00:39:24 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@41 -- # break 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@45 -- # return 0 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.355 00:39:24 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:32.355 00:39:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:32.355 00:39:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:32.355 00:39:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:32.614 00:39:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:32.614 00:39:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:32.614 00:39:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:32.614 00:39:25 -- bdev/nbd_common.sh@65 -- # true 00:05:32.614 00:39:25 -- bdev/nbd_common.sh@65 -- # count=0 00:05:32.614 00:39:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:32.614 00:39:25 -- bdev/nbd_common.sh@104 -- # count=0 00:05:32.614 00:39:25 -- 
bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:32.614 00:39:25 -- bdev/nbd_common.sh@109 -- # return 0 00:05:32.614 00:39:25 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:32.614 00:39:25 -- event/event.sh@35 -- # sleep 3 00:05:33.264 [2024-04-27 00:39:25.761003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:33.264 [2024-04-27 00:39:25.845847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.264 [2024-04-27 00:39:25.845865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:33.264 [2024-04-27 00:39:25.926461] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:33.264 [2024-04-27 00:39:25.926500] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:35.799 00:39:28 -- event/event.sh@23 -- # for i in {0..2} 00:05:35.799 00:39:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:35.799 spdk_app_start Round 2 00:05:35.799 00:39:28 -- event/event.sh@25 -- # waitforlisten 2557506 /var/tmp/spdk-nbd.sock 00:05:35.799 00:39:28 -- common/autotest_common.sh@817 -- # '[' -z 2557506 ']' 00:05:35.799 00:39:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:35.799 00:39:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:35.799 00:39:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:35.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:35.799 00:39:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:35.799 00:39:28 -- common/autotest_common.sh@10 -- # set +x 00:05:35.799 00:39:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:35.799 00:39:28 -- common/autotest_common.sh@850 -- # return 0 00:05:35.799 00:39:28 -- event/event.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.058 Malloc0 00:05:36.058 00:39:28 -- event/event.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:36.058 Malloc1 00:05:36.319 00:39:28 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@12 -- # local i 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 
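Each app_repeat round drives the same write/verify flow that rounds 0 and 1 traced above and round 2 repeats below. Stripped of the harness plumbing, it reduces to this sketch, assuming the two Malloc bdevs are already exported as /dev/nbd0 and /dev/nbd1:

    # Sketch of the nbd data-verify pattern from the trace: seed a 1 MiB
    # random file, copy it onto each exported nbd device with O_DIRECT,
    # then compare the first 1M back against the source.
    WS=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event
    dd if=/dev/urandom of=$WS/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if=$WS/nbdrandtest of=$nbd bs=4096 count=256 oflag=direct
        cmp -b -n 1M $WS/nbdrandtest $nbd
    done
    rm $WS/nbdrandtest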
00:05:36.319 00:39:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:36.319 /dev/nbd0 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:36.319 00:39:28 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:36.319 00:39:28 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:05:36.319 00:39:28 -- common/autotest_common.sh@855 -- # local i 00:05:36.319 00:39:28 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:36.319 00:39:28 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:36.319 00:39:28 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:05:36.319 00:39:28 -- common/autotest_common.sh@859 -- # break 00:05:36.319 00:39:28 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:36.319 00:39:28 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:36.319 00:39:28 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.319 1+0 records in 00:05:36.319 1+0 records out 00:05:36.319 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000324851 s, 12.6 MB/s 00:05:36.319 00:39:28 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:36.319 00:39:28 -- common/autotest_common.sh@872 -- # size=4096 00:05:36.319 00:39:28 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:36.319 00:39:28 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:05:36.319 00:39:28 -- common/autotest_common.sh@875 -- # return 0 00:05:36.320 00:39:28 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.320 00:39:28 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.320 00:39:28 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:36.580 /dev/nbd1 00:05:36.580 00:39:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:36.580 00:39:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:36.580 00:39:29 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:05:36.580 00:39:29 -- common/autotest_common.sh@855 -- # local i 00:05:36.580 00:39:29 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:05:36.580 00:39:29 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:05:36.580 00:39:29 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:05:36.580 00:39:29 -- common/autotest_common.sh@859 -- # break 00:05:36.580 00:39:29 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:05:36.580 00:39:29 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:05:36.580 00:39:29 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:36.580 1+0 records in 00:05:36.580 1+0 records out 00:05:36.580 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000258736 s, 15.8 MB/s 00:05:36.580 00:39:29 -- common/autotest_common.sh@872 -- # stat -c %s /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:36.580 00:39:29 -- common/autotest_common.sh@872 -- # size=4096 00:05:36.580 00:39:29 -- common/autotest_common.sh@873 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdtest 00:05:36.580 00:39:29 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 
0 ']' 00:05:36.580 00:39:29 -- common/autotest_common.sh@875 -- # return 0 00:05:36.580 00:39:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:36.580 00:39:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:36.580 00:39:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:36.580 00:39:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.580 00:39:29 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:36.838 00:39:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:36.838 { 00:05:36.838 "nbd_device": "/dev/nbd0", 00:05:36.838 "bdev_name": "Malloc0" 00:05:36.838 }, 00:05:36.838 { 00:05:36.838 "nbd_device": "/dev/nbd1", 00:05:36.838 "bdev_name": "Malloc1" 00:05:36.838 } 00:05:36.838 ]' 00:05:36.838 00:39:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:36.838 { 00:05:36.839 "nbd_device": "/dev/nbd0", 00:05:36.839 "bdev_name": "Malloc0" 00:05:36.839 }, 00:05:36.839 { 00:05:36.839 "nbd_device": "/dev/nbd1", 00:05:36.839 "bdev_name": "Malloc1" 00:05:36.839 } 00:05:36.839 ]' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:36.839 /dev/nbd1' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:36.839 /dev/nbd1' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@65 -- # count=2 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@95 -- # count=2 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:36.839 256+0 records in 00:05:36.839 256+0 records out 00:05:36.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00475921 s, 220 MB/s 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:36.839 256+0 records in 00:05:36.839 256+0 records out 00:05:36.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152053 s, 69.0 MB/s 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:36.839 256+0 records in 00:05:36.839 256+0 records out 00:05:36.839 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0162668 s, 64.5 MB/s 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@70 -- # 
local nbd_list 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/nbdrandtest 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@51 -- # local i 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:36.839 00:39:29 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@41 -- # break 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@41 -- # break 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@45 -- # return 0 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.098 00:39:29 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 
00:05:37.357 00:39:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@65 -- # echo '' 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@65 -- # true 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@65 -- # count=0 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@66 -- # echo 0 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@104 -- # count=0 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:37.357 00:39:29 -- bdev/nbd_common.sh@109 -- # return 0 00:05:37.357 00:39:29 -- event/event.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:37.617 00:39:30 -- event/event.sh@35 -- # sleep 3 00:05:37.877 [2024-04-27 00:39:30.560787] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:38.137 [2024-04-27 00:39:30.655426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.137 [2024-04-27 00:39:30.655428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.137 [2024-04-27 00:39:30.737801] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:38.137 [2024-04-27 00:39:30.737841] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:40.677 00:39:33 -- event/event.sh@38 -- # waitforlisten 2557506 /var/tmp/spdk-nbd.sock 00:05:40.677 00:39:33 -- common/autotest_common.sh@817 -- # '[' -z 2557506 ']' 00:05:40.677 00:39:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:40.677 00:39:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:40.677 00:39:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:40.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:40.677 00:39:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:40.677 00:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:40.677 00:39:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:40.677 00:39:33 -- common/autotest_common.sh@850 -- # return 0 00:05:40.677 00:39:33 -- event/event.sh@39 -- # killprocess 2557506 00:05:40.677 00:39:33 -- common/autotest_common.sh@936 -- # '[' -z 2557506 ']' 00:05:40.677 00:39:33 -- common/autotest_common.sh@940 -- # kill -0 2557506 00:05:40.677 00:39:33 -- common/autotest_common.sh@941 -- # uname 00:05:40.677 00:39:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:40.677 00:39:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2557506 00:05:40.677 00:39:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:40.677 00:39:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:40.677 00:39:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2557506' 00:05:40.677 killing process with pid 2557506 00:05:40.677 00:39:33 -- common/autotest_common.sh@955 -- # kill 2557506 00:05:40.677 00:39:33 -- common/autotest_common.sh@960 -- # wait 2557506 00:05:41.245 spdk_app_start is called in Round 0. 00:05:41.245 Shutdown signal received, stop current app iteration 00:05:41.245 Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 reinitialization... 00:05:41.245 spdk_app_start is called in Round 1. 
00:05:41.245 Shutdown signal received, stop current app iteration 00:05:41.245 Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 reinitialization... 00:05:41.245 spdk_app_start is called in Round 2. 00:05:41.245 Shutdown signal received, stop current app iteration 00:05:41.245 Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 reinitialization... 00:05:41.245 spdk_app_start is called in Round 3. 00:05:41.245 Shutdown signal received, stop current app iteration 00:05:41.245 00:39:33 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:41.245 00:39:33 -- event/event.sh@42 -- # return 0 00:05:41.245 00:05:41.245 real 0m15.735s 00:05:41.245 user 0m33.011s 00:05:41.245 sys 0m2.106s 00:05:41.245 00:39:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:41.245 00:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:41.245 ************************************ 00:05:41.245 END TEST app_repeat 00:05:41.245 ************************************ 00:05:41.245 00:39:33 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:41.245 00:39:33 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:41.245 00:39:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.245 00:39:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.245 00:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:41.245 ************************************ 00:05:41.245 START TEST cpu_locks 00:05:41.245 ************************************ 00:05:41.245 00:39:33 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:41.245 * Looking for test storage... 00:05:41.245 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/event 00:05:41.245 00:39:33 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:41.245 00:39:33 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:41.245 00:39:33 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:41.245 00:39:33 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:41.245 00:39:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:41.245 00:39:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:41.245 00:39:33 -- common/autotest_common.sh@10 -- # set +x 00:05:41.505 ************************************ 00:05:41.505 START TEST default_locks 00:05:41.505 ************************************ 00:05:41.505 00:39:34 -- common/autotest_common.sh@1111 -- # default_locks 00:05:41.505 00:39:34 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2561026 00:05:41.505 00:39:34 -- event/cpu_locks.sh@47 -- # waitforlisten 2561026 00:05:41.505 00:39:34 -- common/autotest_common.sh@817 -- # '[' -z 2561026 ']' 00:05:41.505 00:39:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.505 00:39:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:41.505 00:39:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
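What default_locks checks below is visible from outside the harness as well — a minimal sketch, assuming the target pid is known and lslocks is installed (the spdk_cpu_lock file-name prefix comes from the grep in the trace):

    # Sketch: confirm spdk_tgt took its per-core lock by listing the
    # flocks held by the pid and grepping for the spdk_cpu_lock files.
    pid=2561026   # pid from the trace; substitute your own
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held"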
00:05:41.505 00:39:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:41.505 00:39:34 -- common/autotest_common.sh@10 -- # set +x 00:05:41.505 00:39:34 -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:41.505 [2024-04-27 00:39:34.138617] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:41.505 [2024-04-27 00:39:34.138750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561026 ] 00:05:41.766 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.766 [2024-04-27 00:39:34.266901] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.766 [2024-04-27 00:39:34.356776] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.336 00:39:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:42.336 00:39:34 -- common/autotest_common.sh@850 -- # return 0 00:05:42.336 00:39:34 -- event/cpu_locks.sh@49 -- # locks_exist 2561026 00:05:42.336 00:39:34 -- event/cpu_locks.sh@22 -- # lslocks -p 2561026 00:05:42.336 00:39:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:42.336 lslocks: write error 00:05:42.336 00:39:34 -- event/cpu_locks.sh@50 -- # killprocess 2561026 00:05:42.336 00:39:34 -- common/autotest_common.sh@936 -- # '[' -z 2561026 ']' 00:05:42.336 00:39:34 -- common/autotest_common.sh@940 -- # kill -0 2561026 00:05:42.336 00:39:34 -- common/autotest_common.sh@941 -- # uname 00:05:42.336 00:39:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:42.336 00:39:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2561026 00:05:42.336 00:39:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:42.336 00:39:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:42.336 00:39:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2561026' 00:05:42.336 killing process with pid 2561026 00:05:42.336 00:39:35 -- common/autotest_common.sh@955 -- # kill 2561026 00:05:42.336 00:39:35 -- common/autotest_common.sh@960 -- # wait 2561026 00:05:43.277 00:39:35 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2561026 00:05:43.277 00:39:35 -- common/autotest_common.sh@638 -- # local es=0 00:05:43.277 00:39:35 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2561026 00:05:43.277 00:39:35 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:43.277 00:39:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:43.277 00:39:35 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:43.277 00:39:35 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:43.277 00:39:35 -- common/autotest_common.sh@641 -- # waitforlisten 2561026 00:05:43.277 00:39:35 -- common/autotest_common.sh@817 -- # '[' -z 2561026 ']' 00:05:43.277 00:39:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.277 00:39:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.277 00:39:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
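The locks_exist check above is the heart of default_locks: a target started with -m 0x1 must hold a flock on a per-core lock file, which lslocks can see (the stray 'lslocks: write error' is tool output the test ignores). The same check by hand, with this run's pid:

pid=2561026                                    # spdk_tgt launched with -m 0x1 above
lslocks -p "$pid" | grep -q spdk_cpu_lock &&
  echo "core lock held by $pid"
ls /var/tmp/spdk_cpu_lock_* 2>/dev/null        # one lock file per claimed core, e.g. ..._000 for core 0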
00:05:43.277 00:39:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.277 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.277 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2561026) - No such process 00:05:43.277 ERROR: process (pid: 2561026) is no longer running 00:05:43.277 00:39:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:43.277 00:39:35 -- common/autotest_common.sh@850 -- # return 1 00:05:43.277 00:39:35 -- common/autotest_common.sh@641 -- # es=1 00:05:43.277 00:39:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:43.277 00:39:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:43.277 00:39:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:43.277 00:39:35 -- event/cpu_locks.sh@54 -- # no_locks 00:05:43.277 00:39:35 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:43.277 00:39:35 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:43.277 00:39:35 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:43.277 00:05:43.277 real 0m1.858s 00:05:43.277 user 0m1.805s 00:05:43.277 sys 0m0.502s 00:05:43.277 00:39:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:43.277 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.277 ************************************ 00:05:43.277 END TEST default_locks 00:05:43.277 ************************************ 00:05:43.277 00:39:35 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:43.277 00:39:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:43.277 00:39:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:43.277 00:39:35 -- common/autotest_common.sh@10 -- # set +x 00:05:43.537 ************************************ 00:05:43.537 START TEST default_locks_via_rpc 00:05:43.537 ************************************ 00:05:43.537 00:39:36 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:05:43.537 00:39:36 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2561374 00:05:43.537 00:39:36 -- event/cpu_locks.sh@63 -- # waitforlisten 2561374 00:05:43.537 00:39:36 -- common/autotest_common.sh@817 -- # '[' -z 2561374 ']' 00:05:43.537 00:39:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.537 00:39:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:43.537 00:39:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.537 00:39:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:43.537 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:43.537 00:39:36 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:43.537 [2024-04-27 00:39:36.096185] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:43.537 [2024-04-27 00:39:36.096290] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561374 ] 00:05:43.537 EAL: No free 2048 kB hugepages reported on node 1 00:05:43.537 [2024-04-27 00:39:36.207830] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.797 [2024-04-27 00:39:36.298145] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.369 00:39:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:44.369 00:39:36 -- common/autotest_common.sh@850 -- # return 0 00:05:44.369 00:39:36 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:44.369 00:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:44.369 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.369 00:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:44.369 00:39:36 -- event/cpu_locks.sh@67 -- # no_locks 00:05:44.369 00:39:36 -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.369 00:39:36 -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.369 00:39:36 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.369 00:39:36 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:44.369 00:39:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:05:44.369 00:39:36 -- common/autotest_common.sh@10 -- # set +x 00:05:44.369 00:39:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:05:44.369 00:39:36 -- event/cpu_locks.sh@71 -- # locks_exist 2561374 00:05:44.369 00:39:36 -- event/cpu_locks.sh@22 -- # lslocks -p 2561374 00:05:44.369 00:39:36 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.369 00:39:36 -- event/cpu_locks.sh@73 -- # killprocess 2561374 00:05:44.369 00:39:36 -- common/autotest_common.sh@936 -- # '[' -z 2561374 ']' 00:05:44.369 00:39:36 -- common/autotest_common.sh@940 -- # kill -0 2561374 00:05:44.369 00:39:36 -- common/autotest_common.sh@941 -- # uname 00:05:44.369 00:39:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:44.369 00:39:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2561374 00:05:44.369 00:39:36 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:44.369 00:39:36 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:44.369 00:39:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2561374' 00:05:44.369 killing process with pid 2561374 00:05:44.369 00:39:36 -- common/autotest_common.sh@955 -- # kill 2561374 00:05:44.369 00:39:36 -- common/autotest_common.sh@960 -- # wait 2561374 00:05:45.306 00:05:45.306 real 0m1.745s 00:05:45.306 user 0m1.670s 00:05:45.306 sys 0m0.430s 00:05:45.306 00:39:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:45.306 00:39:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.306 ************************************ 00:05:45.306 END TEST default_locks_via_rpc 00:05:45.306 ************************************ 00:05:45.306 00:39:37 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.306 00:39:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:45.306 00:39:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:45.306 00:39:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.306 ************************************ 00:05:45.306 START TEST non_locking_app_on_locked_coremask 
00:05:45.306 ************************************ 00:05:45.306 00:39:37 -- common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:05:45.306 00:39:37 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2561706 00:05:45.306 00:39:37 -- event/cpu_locks.sh@81 -- # waitforlisten 2561706 /var/tmp/spdk.sock 00:05:45.306 00:39:37 -- common/autotest_common.sh@817 -- # '[' -z 2561706 ']' 00:05:45.306 00:39:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.306 00:39:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:45.306 00:39:37 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.306 00:39:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:45.306 00:39:37 -- common/autotest_common.sh@10 -- # set +x 00:05:45.306 00:39:37 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:45.306 [2024-04-27 00:39:37.958770] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:45.306 [2024-04-27 00:39:37.958879] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2561706 ] 00:05:45.566 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.566 [2024-04-27 00:39:38.077262] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:45.566 [2024-04-27 00:39:38.171406] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.134 00:39:38 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:46.134 00:39:38 -- common/autotest_common.sh@850 -- # return 0 00:05:46.135 00:39:38 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:46.135 00:39:38 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2562002 00:05:46.135 00:39:38 -- event/cpu_locks.sh@85 -- # waitforlisten 2562002 /var/tmp/spdk2.sock 00:05:46.135 00:39:38 -- common/autotest_common.sh@817 -- # '[' -z 2562002 ']' 00:05:46.135 00:39:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.135 00:39:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:46.135 00:39:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.135 00:39:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:46.135 00:39:38 -- common/autotest_common.sh@10 -- # set +x 00:05:46.135 [2024-04-27 00:39:38.679804] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:46.135 [2024-04-27 00:39:38.679876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562002 ] 00:05:46.135 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.135 [2024-04-27 00:39:38.797854] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:46.135 [2024-04-27 00:39:38.797891] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.395 [2024-04-27 00:39:38.986421] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.334 00:39:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:47.334 00:39:39 -- common/autotest_common.sh@850 -- # return 0 00:05:47.334 00:39:39 -- event/cpu_locks.sh@87 -- # locks_exist 2561706 00:05:47.334 00:39:39 -- event/cpu_locks.sh@22 -- # lslocks -p 2561706 00:05:47.334 00:39:39 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.334 lslocks: write error 00:05:47.334 00:39:39 -- event/cpu_locks.sh@89 -- # killprocess 2561706 00:05:47.334 00:39:39 -- common/autotest_common.sh@936 -- # '[' -z 2561706 ']' 00:05:47.334 00:39:39 -- common/autotest_common.sh@940 -- # kill -0 2561706 00:05:47.334 00:39:39 -- common/autotest_common.sh@941 -- # uname 00:05:47.334 00:39:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:47.334 00:39:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2561706 00:05:47.594 00:39:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:47.594 00:39:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:47.594 00:39:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2561706' 00:05:47.594 killing process with pid 2561706 00:05:47.594 00:39:40 -- common/autotest_common.sh@955 -- # kill 2561706 00:05:47.594 00:39:40 -- common/autotest_common.sh@960 -- # wait 2561706 00:05:48.973 00:39:41 -- event/cpu_locks.sh@90 -- # killprocess 2562002 00:05:48.973 00:39:41 -- common/autotest_common.sh@936 -- # '[' -z 2562002 ']' 00:05:48.973 00:39:41 -- common/autotest_common.sh@940 -- # kill -0 2562002 00:05:48.973 00:39:41 -- common/autotest_common.sh@941 -- # uname 00:05:48.973 00:39:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:48.973 00:39:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2562002 00:05:49.234 00:39:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:49.234 00:39:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:49.234 00:39:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2562002' 00:05:49.234 killing process with pid 2562002 00:05:49.234 00:39:41 -- common/autotest_common.sh@955 -- # kill 2562002 00:05:49.234 00:39:41 -- common/autotest_common.sh@960 -- # wait 2562002 00:05:50.175 00:05:50.175 real 0m4.688s 00:05:50.175 user 0m4.704s 00:05:50.175 sys 0m0.932s 00:05:50.175 00:39:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:50.175 00:39:42 -- common/autotest_common.sh@10 -- # set +x 00:05:50.175 ************************************ 00:05:50.175 END TEST non_locking_app_on_locked_coremask 00:05:50.175 ************************************ 00:05:50.175 00:39:42 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:50.175 00:39:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:50.175 00:39:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:50.175 00:39:42 -- common/autotest_common.sh@10 -- # set +x 00:05:50.175 ************************************ 00:05:50.175 START TEST locking_app_on_unlocked_coremask 00:05:50.175 ************************************ 00:05:50.175 00:39:42 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:05:50.175 00:39:42 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2562669 00:05:50.175 00:39:42 -- 
event/cpu_locks.sh@99 -- # waitforlisten 2562669 /var/tmp/spdk.sock 00:05:50.175 00:39:42 -- common/autotest_common.sh@817 -- # '[' -z 2562669 ']' 00:05:50.175 00:39:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.175 00:39:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:50.175 00:39:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.175 00:39:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:50.175 00:39:42 -- common/autotest_common.sh@10 -- # set +x 00:05:50.175 00:39:42 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:50.175 [2024-04-27 00:39:42.770064] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:50.175 [2024-04-27 00:39:42.770169] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562669 ] 00:05:50.175 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.444 [2024-04-27 00:39:42.888836] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:50.445 [2024-04-27 00:39:42.888868] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.445 [2024-04-27 00:39:42.980859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.016 00:39:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:51.016 00:39:43 -- common/autotest_common.sh@850 -- # return 0 00:05:51.016 00:39:43 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2562951 00:05:51.016 00:39:43 -- event/cpu_locks.sh@103 -- # waitforlisten 2562951 /var/tmp/spdk2.sock 00:05:51.016 00:39:43 -- common/autotest_common.sh@817 -- # '[' -z 2562951 ']' 00:05:51.016 00:39:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:51.016 00:39:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:51.016 00:39:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:51.016 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:51.016 00:39:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:51.016 00:39:43 -- common/autotest_common.sh@10 -- # set +x 00:05:51.016 00:39:43 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:51.016 [2024-04-27 00:39:43.529836] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:05:51.016 [2024-04-27 00:39:43.529952] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2562951 ] 00:05:51.016 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.016 [2024-04-27 00:39:43.679996] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.277 [2024-04-27 00:39:43.863350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.217 00:39:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:52.217 00:39:44 -- common/autotest_common.sh@850 -- # return 0 00:05:52.217 00:39:44 -- event/cpu_locks.sh@105 -- # locks_exist 2562951 00:05:52.217 00:39:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:52.217 00:39:44 -- event/cpu_locks.sh@22 -- # lslocks -p 2562951 00:05:52.478 lslocks: write error 00:05:52.478 00:39:44 -- event/cpu_locks.sh@107 -- # killprocess 2562669 00:05:52.478 00:39:44 -- common/autotest_common.sh@936 -- # '[' -z 2562669 ']' 00:05:52.478 00:39:44 -- common/autotest_common.sh@940 -- # kill -0 2562669 00:05:52.478 00:39:44 -- common/autotest_common.sh@941 -- # uname 00:05:52.478 00:39:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:52.478 00:39:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2562669 00:05:52.478 00:39:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:52.478 00:39:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:52.478 00:39:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2562669' 00:05:52.478 killing process with pid 2562669 00:05:52.478 00:39:44 -- common/autotest_common.sh@955 -- # kill 2562669 00:05:52.478 00:39:44 -- common/autotest_common.sh@960 -- # wait 2562669 00:05:54.450 00:39:46 -- event/cpu_locks.sh@108 -- # killprocess 2562951 00:05:54.450 00:39:46 -- common/autotest_common.sh@936 -- # '[' -z 2562951 ']' 00:05:54.450 00:39:46 -- common/autotest_common.sh@940 -- # kill -0 2562951 00:05:54.450 00:39:46 -- common/autotest_common.sh@941 -- # uname 00:05:54.450 00:39:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:54.450 00:39:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2562951 00:05:54.450 00:39:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:54.450 00:39:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:54.450 00:39:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2562951' 00:05:54.450 killing process with pid 2562951 00:05:54.450 00:39:46 -- common/autotest_common.sh@955 -- # kill 2562951 00:05:54.450 00:39:46 -- common/autotest_common.sh@960 -- # wait 2562951 00:05:55.022 00:05:55.022 real 0m4.917s 00:05:55.022 user 0m4.981s 00:05:55.022 sys 0m0.982s 00:05:55.022 00:39:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:55.022 00:39:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.022 ************************************ 00:05:55.022 END TEST locking_app_on_unlocked_coremask 00:05:55.022 ************************************ 00:05:55.022 00:39:47 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:55.022 00:39:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:55.022 00:39:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:55.022 00:39:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.283 
************************************ 00:05:55.284 START TEST locking_app_on_locked_coremask 00:05:55.284 ************************************ 00:05:55.284 00:39:47 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:05:55.284 00:39:47 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2563801 00:05:55.284 00:39:47 -- event/cpu_locks.sh@116 -- # waitforlisten 2563801 /var/tmp/spdk.sock 00:05:55.284 00:39:47 -- common/autotest_common.sh@817 -- # '[' -z 2563801 ']' 00:05:55.284 00:39:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:55.284 00:39:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.284 00:39:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:55.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:55.284 00:39:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.284 00:39:47 -- common/autotest_common.sh@10 -- # set +x 00:05:55.284 00:39:47 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:55.284 [2024-04-27 00:39:47.813484] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:55.284 [2024-04-27 00:39:47.813586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563801 ] 00:05:55.284 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.284 [2024-04-27 00:39:47.923124] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.544 [2024-04-27 00:39:48.014410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.805 00:39:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:55.805 00:39:48 -- common/autotest_common.sh@850 -- # return 0 00:05:55.805 00:39:48 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2563900 00:05:55.805 00:39:48 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2563900 /var/tmp/spdk2.sock 00:05:55.805 00:39:48 -- common/autotest_common.sh@638 -- # local es=0 00:05:55.805 00:39:48 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2563900 /var/tmp/spdk2.sock 00:05:55.805 00:39:48 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:55.805 00:39:48 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:55.805 00:39:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.805 00:39:48 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:55.805 00:39:48 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:55.805 00:39:48 -- common/autotest_common.sh@641 -- # waitforlisten 2563900 /var/tmp/spdk2.sock 00:05:55.805 00:39:48 -- common/autotest_common.sh@817 -- # '[' -z 2563900 ']' 00:05:55.805 00:39:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.805 00:39:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:55.805 00:39:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
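Here the second target keeps cpumask locking enabled, so it must fail to start; the test wraps waitforlisten in NOT to assert exactly that (the claim error and the 'No such process' kill follow below). A simplified sketch of the expect-failure idiom, not autotest_common.sh's exact implementation:

NOT() {                       # succeed only if the wrapped command fails
  if "$@"; then return 1; else return 0; fi
}
NOT waitforlisten "$pid2" /var/tmp/spdk2.sock &&
  echo "second instance was rejected, as expected"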
00:05:55.805 00:39:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:55.805 00:39:48 -- common/autotest_common.sh@10 -- # set +x 00:05:56.066 [2024-04-27 00:39:48.577612] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:56.066 [2024-04-27 00:39:48.577727] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2563900 ] 00:05:56.066 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.066 [2024-04-27 00:39:48.731240] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2563801 has claimed it. 00:05:56.066 [2024-04-27 00:39:48.731289] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:56.635 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2563900) - No such process 00:05:56.635 ERROR: process (pid: 2563900) is no longer running 00:05:56.635 00:39:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:56.635 00:39:49 -- common/autotest_common.sh@850 -- # return 1 00:05:56.635 00:39:49 -- common/autotest_common.sh@641 -- # es=1 00:05:56.635 00:39:49 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:56.635 00:39:49 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:56.635 00:39:49 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:56.635 00:39:49 -- event/cpu_locks.sh@122 -- # locks_exist 2563801 00:05:56.635 00:39:49 -- event/cpu_locks.sh@22 -- # lslocks -p 2563801 00:05:56.635 00:39:49 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:56.635 lslocks: write error 00:05:56.636 00:39:49 -- event/cpu_locks.sh@124 -- # killprocess 2563801 00:05:56.636 00:39:49 -- common/autotest_common.sh@936 -- # '[' -z 2563801 ']' 00:05:56.636 00:39:49 -- common/autotest_common.sh@940 -- # kill -0 2563801 00:05:56.896 00:39:49 -- common/autotest_common.sh@941 -- # uname 00:05:56.896 00:39:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:56.896 00:39:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2563801 00:05:56.896 00:39:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:56.896 00:39:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:56.896 00:39:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2563801' 00:05:56.896 killing process with pid 2563801 00:05:56.896 00:39:49 -- common/autotest_common.sh@955 -- # kill 2563801 00:05:56.896 00:39:49 -- common/autotest_common.sh@960 -- # wait 2563801 00:05:57.839 00:05:57.839 real 0m2.502s 00:05:57.839 user 0m2.563s 00:05:57.839 sys 0m0.656s 00:05:57.839 00:39:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:05:57.839 00:39:50 -- common/autotest_common.sh@10 -- # set +x 00:05:57.839 ************************************ 00:05:57.839 END TEST locking_app_on_locked_coremask 00:05:57.839 ************************************ 00:05:57.839 00:39:50 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:57.839 00:39:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:57.839 00:39:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:57.839 00:39:50 -- common/autotest_common.sh@10 -- # set +x 00:05:57.839 ************************************ 00:05:57.839 START TEST locking_overlapped_coremask 00:05:57.839 
************************************ 00:05:57.839 00:39:50 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:05:57.839 00:39:50 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2564243 00:05:57.839 00:39:50 -- event/cpu_locks.sh@133 -- # waitforlisten 2564243 /var/tmp/spdk.sock 00:05:57.839 00:39:50 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:57.839 00:39:50 -- common/autotest_common.sh@817 -- # '[' -z 2564243 ']' 00:05:57.839 00:39:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:57.839 00:39:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:57.839 00:39:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:57.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:57.839 00:39:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:57.839 00:39:50 -- common/autotest_common.sh@10 -- # set +x 00:05:57.839 [2024-04-27 00:39:50.427388] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:57.839 [2024-04-27 00:39:50.427493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564243 ] 00:05:57.839 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.099 [2024-04-27 00:39:50.549692] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:58.099 [2024-04-27 00:39:50.643186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.099 [2024-04-27 00:39:50.643282] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.099 [2024-04-27 00:39:50.643287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.669 00:39:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:58.670 00:39:51 -- common/autotest_common.sh@850 -- # return 0 00:05:58.670 00:39:51 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2564535 00:05:58.670 00:39:51 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2564535 /var/tmp/spdk2.sock 00:05:58.670 00:39:51 -- common/autotest_common.sh@638 -- # local es=0 00:05:58.670 00:39:51 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 2564535 /var/tmp/spdk2.sock 00:05:58.670 00:39:51 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:05:58.670 00:39:51 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:58.670 00:39:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:58.670 00:39:51 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:05:58.670 00:39:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:05:58.670 00:39:51 -- common/autotest_common.sh@641 -- # waitforlisten 2564535 /var/tmp/spdk2.sock 00:05:58.670 00:39:51 -- common/autotest_common.sh@817 -- # '[' -z 2564535 ']' 00:05:58.670 00:39:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.670 00:39:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:05:58.670 00:39:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
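locking_overlapped_coremask pits -m 0x7 against -m 0x1c; writing the masks out in binary shows the single core they fight over, matching the 'Cannot create lock on core 2' error below:

# 0x7  = 0b00111 -> cores 0,1,2   (first target)
# 0x1c = 0b11100 -> cores 2,3,4   (second target)
printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2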
00:05:58.670 00:39:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:05:58.670 00:39:51 -- common/autotest_common.sh@10 -- # set +x 00:05:58.670 [2024-04-27 00:39:51.217566] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:05:58.670 [2024-04-27 00:39:51.217715] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564535 ] 00:05:58.670 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.930 [2024-04-27 00:39:51.382759] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2564243 has claimed it. 00:05:58.930 [2024-04-27 00:39:51.382808] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:59.191 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 832: kill: (2564535) - No such process 00:05:59.191 ERROR: process (pid: 2564535) is no longer running 00:05:59.191 00:39:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:05:59.191 00:39:51 -- common/autotest_common.sh@850 -- # return 1 00:05:59.191 00:39:51 -- common/autotest_common.sh@641 -- # es=1 00:05:59.191 00:39:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:05:59.191 00:39:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:05:59.191 00:39:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:05:59.191 00:39:51 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:59.191 00:39:51 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:59.191 00:39:51 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:59.191 00:39:51 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:59.191 00:39:51 -- event/cpu_locks.sh@141 -- # killprocess 2564243 00:05:59.191 00:39:51 -- common/autotest_common.sh@936 -- # '[' -z 2564243 ']' 00:05:59.191 00:39:51 -- common/autotest_common.sh@940 -- # kill -0 2564243 00:05:59.191 00:39:51 -- common/autotest_common.sh@941 -- # uname 00:05:59.191 00:39:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.191 00:39:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2564243 00:05:59.191 00:39:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.191 00:39:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.191 00:39:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2564243' 00:05:59.191 killing process with pid 2564243 00:05:59.191 00:39:51 -- common/autotest_common.sh@955 -- # kill 2564243 00:05:59.191 00:39:51 -- common/autotest_common.sh@960 -- # wait 2564243 00:06:00.131 00:06:00.131 real 0m2.304s 00:06:00.131 user 0m5.971s 00:06:00.131 sys 0m0.591s 00:06:00.131 00:39:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:00.131 00:39:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.131 ************************************ 00:06:00.131 END TEST locking_overlapped_coremask 00:06:00.131 ************************************ 00:06:00.131 00:39:52 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:00.131 00:39:52 -- 
common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:00.131 00:39:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:00.131 00:39:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.131 ************************************ 00:06:00.131 START TEST locking_overlapped_coremask_via_rpc 00:06:00.131 ************************************ 00:06:00.131 00:39:52 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:06:00.131 00:39:52 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2564873 00:06:00.131 00:39:52 -- event/cpu_locks.sh@149 -- # waitforlisten 2564873 /var/tmp/spdk.sock 00:06:00.131 00:39:52 -- common/autotest_common.sh@817 -- # '[' -z 2564873 ']' 00:06:00.131 00:39:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.131 00:39:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.131 00:39:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.131 00:39:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.131 00:39:52 -- common/autotest_common.sh@10 -- # set +x 00:06:00.131 00:39:52 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:00.392 [2024-04-27 00:39:52.898663] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:00.392 [2024-04-27 00:39:52.898796] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564873 ] 00:06:00.392 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.392 [2024-04-27 00:39:53.030822] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:00.392 [2024-04-27 00:39:53.030865] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:00.653 [2024-04-27 00:39:53.123616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.653 [2024-04-27 00:39:53.123696] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.653 [2024-04-27 00:39:53.123703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:00.915 00:39:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:00.915 00:39:53 -- common/autotest_common.sh@850 -- # return 0 00:06:00.915 00:39:53 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2564907 00:06:00.915 00:39:53 -- event/cpu_locks.sh@153 -- # waitforlisten 2564907 /var/tmp/spdk2.sock 00:06:00.915 00:39:53 -- common/autotest_common.sh@817 -- # '[' -z 2564907 ']' 00:06:00.915 00:39:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:00.915 00:39:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:00.915 00:39:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:00.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
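The via_rpc variant starts both targets with --disable-cpumask-locks and only takes the locks afterwards through the framework_enable_cpumask_locks RPC, so the collision surfaces as an RPC error rather than a startup failure. The sequence in outline, sockets as in this run:

build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks &                           # cores 0-2, unlocked
build/bin/spdk_tgt -m 0x1c --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # cores 2-4, unlocked
scripts/rpc.py framework_enable_cpumask_locks                                 # first claim succeeds
scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks          # fails: core 2 already claimed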
00:06:00.915 00:39:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:00.915 00:39:53 -- common/autotest_common.sh@10 -- # set +x 00:06:00.915 00:39:53 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:01.175 [2024-04-27 00:39:53.707943] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:01.175 [2024-04-27 00:39:53.708088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2564907 ] 00:06:01.175 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.436 [2024-04-27 00:39:53.874935] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:01.436 [2024-04-27 00:39:53.874978] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:01.436 [2024-04-27 00:39:54.065041] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:01.436 [2024-04-27 00:39:54.068288] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:01.436 [2024-04-27 00:39:54.068318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:06:02.379 00:39:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.379 00:39:54 -- common/autotest_common.sh@850 -- # return 0 00:06:02.379 00:39:54 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:02.379 00:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.379 00:39:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.379 00:39:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:02.379 00:39:54 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.379 00:39:54 -- common/autotest_common.sh@638 -- # local es=0 00:06:02.379 00:39:54 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.379 00:39:54 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:06:02.379 00:39:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:02.379 00:39:54 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:06:02.379 00:39:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:02.379 00:39:54 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:02.379 00:39:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:02.379 00:39:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.379 [2024-04-27 00:39:54.762334] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2564873 has claimed it. 
00:06:02.379 request: 00:06:02.379 { 00:06:02.379 "method": "framework_enable_cpumask_locks", 00:06:02.379 "req_id": 1 00:06:02.379 } 00:06:02.379 Got JSON-RPC error response 00:06:02.379 response: 00:06:02.379 { 00:06:02.379 "code": -32603, 00:06:02.379 "message": "Failed to claim CPU core: 2" 00:06:02.379 } 00:06:02.379 00:39:54 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:06:02.379 00:39:54 -- common/autotest_common.sh@641 -- # es=1 00:06:02.379 00:39:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:02.379 00:39:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:06:02.379 00:39:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:02.379 00:39:54 -- event/cpu_locks.sh@158 -- # waitforlisten 2564873 /var/tmp/spdk.sock 00:06:02.379 00:39:54 -- common/autotest_common.sh@817 -- # '[' -z 2564873 ']' 00:06:02.379 00:39:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.379 00:39:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.379 00:39:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:02.379 00:39:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.379 00:39:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.379 00:39:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.379 00:39:54 -- common/autotest_common.sh@850 -- # return 0 00:06:02.379 00:39:54 -- event/cpu_locks.sh@159 -- # waitforlisten 2564907 /var/tmp/spdk2.sock 00:06:02.379 00:39:54 -- common/autotest_common.sh@817 -- # '[' -z 2564907 ']' 00:06:02.379 00:39:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:02.379 00:39:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:02.379 00:39:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:02.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
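After the failed claim, the test verifies that exactly the first target's three lock files exist; the check_remaining_locks step in the trace boils down to a glob-against-expected comparison:

shopt -s nullglob
locks=(/var/tmp/spdk_cpu_lock_*)
expected=(/var/tmp/spdk_cpu_lock_{000..002})          # cores 0,1,2 from mask 0x7
[[ ${locks[*]} == "${expected[*]}" ]] &&
  echo "only the first target holds core locks"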
00:06:02.379 00:39:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:02.379 00:39:54 -- common/autotest_common.sh@10 -- # set +x 00:06:02.639 00:39:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:02.639 00:39:55 -- common/autotest_common.sh@850 -- # return 0 00:06:02.639 00:39:55 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:02.639 00:39:55 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:02.639 00:39:55 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:02.639 00:39:55 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:02.639 00:06:02.639 real 0m2.299s 00:06:02.639 user 0m0.727s 00:06:02.639 sys 0m0.175s 00:06:02.639 00:39:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:02.639 00:39:55 -- common/autotest_common.sh@10 -- # set +x 00:06:02.639 ************************************ 00:06:02.639 END TEST locking_overlapped_coremask_via_rpc 00:06:02.639 ************************************ 00:06:02.639 00:39:55 -- event/cpu_locks.sh@174 -- # cleanup 00:06:02.639 00:39:55 -- event/cpu_locks.sh@15 -- # [[ -z 2564873 ]] 00:06:02.639 00:39:55 -- event/cpu_locks.sh@15 -- # killprocess 2564873 00:06:02.639 00:39:55 -- common/autotest_common.sh@936 -- # '[' -z 2564873 ']' 00:06:02.639 00:39:55 -- common/autotest_common.sh@940 -- # kill -0 2564873 00:06:02.639 00:39:55 -- common/autotest_common.sh@941 -- # uname 00:06:02.639 00:39:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:02.639 00:39:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2564873 00:06:02.639 00:39:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:02.639 00:39:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:02.639 00:39:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2564873' 00:06:02.639 killing process with pid 2564873 00:06:02.639 00:39:55 -- common/autotest_common.sh@955 -- # kill 2564873 00:06:02.639 00:39:55 -- common/autotest_common.sh@960 -- # wait 2564873 00:06:03.579 00:39:56 -- event/cpu_locks.sh@16 -- # [[ -z 2564907 ]] 00:06:03.579 00:39:56 -- event/cpu_locks.sh@16 -- # killprocess 2564907 00:06:03.579 00:39:56 -- common/autotest_common.sh@936 -- # '[' -z 2564907 ']' 00:06:03.579 00:39:56 -- common/autotest_common.sh@940 -- # kill -0 2564907 00:06:03.579 00:39:56 -- common/autotest_common.sh@941 -- # uname 00:06:03.579 00:39:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:03.579 00:39:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2564907 00:06:03.580 00:39:56 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:03.580 00:39:56 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:03.580 00:39:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2564907' 00:06:03.580 killing process with pid 2564907 00:06:03.580 00:39:56 -- common/autotest_common.sh@955 -- # kill 2564907 00:06:03.580 00:39:56 -- common/autotest_common.sh@960 -- # wait 2564907 00:06:04.519 00:39:56 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.519 00:39:56 -- event/cpu_locks.sh@1 -- # cleanup 00:06:04.519 00:39:56 -- event/cpu_locks.sh@15 -- # [[ -z 2564873 ]] 00:06:04.519 00:39:56 -- event/cpu_locks.sh@15 -- # killprocess 2564873 
00:06:04.519 00:39:56 -- common/autotest_common.sh@936 -- # '[' -z 2564873 ']' 00:06:04.519 00:39:56 -- common/autotest_common.sh@940 -- # kill -0 2564873 00:06:04.519 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2564873) - No such process 00:06:04.519 00:39:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2564873 is not found' 00:06:04.519 Process with pid 2564873 is not found 00:06:04.519 00:39:56 -- event/cpu_locks.sh@16 -- # [[ -z 2564907 ]] 00:06:04.519 00:39:56 -- event/cpu_locks.sh@16 -- # killprocess 2564907 00:06:04.519 00:39:56 -- common/autotest_common.sh@936 -- # '[' -z 2564907 ']' 00:06:04.519 00:39:56 -- common/autotest_common.sh@940 -- # kill -0 2564907 00:06:04.519 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2564907) - No such process 00:06:04.519 00:39:56 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2564907 is not found' 00:06:04.519 Process with pid 2564907 is not found 00:06:04.519 00:39:56 -- event/cpu_locks.sh@18 -- # rm -f 00:06:04.519 00:06:04.519 real 0m23.064s 00:06:04.519 user 0m37.284s 00:06:04.519 sys 0m5.548s 00:06:04.519 00:39:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.519 00:39:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.519 ************************************ 00:06:04.519 END TEST cpu_locks 00:06:04.519 ************************************ 00:06:04.519 00:06:04.519 real 0m46.550s 00:06:04.519 user 1m22.833s 00:06:04.519 sys 0m8.965s 00:06:04.519 00:39:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:04.519 00:39:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.519 ************************************ 00:06:04.519 END TEST event 00:06:04.519 ************************************ 00:06:04.519 00:39:56 -- spdk/autotest.sh@178 -- # run_test thread /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:06:04.519 00:39:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:04.519 00:39:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.519 00:39:56 -- common/autotest_common.sh@10 -- # set +x 00:06:04.519 ************************************ 00:06:04.519 START TEST thread 00:06:04.519 ************************************ 00:06:04.519 00:39:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/thread.sh 00:06:04.519 * Looking for test storage... 00:06:04.519 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread 00:06:04.519 00:39:57 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.519 00:39:57 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:04.519 00:39:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:04.519 00:39:57 -- common/autotest_common.sh@10 -- # set +x 00:06:04.779 ************************************ 00:06:04.779 START TEST thread_poller_perf 00:06:04.779 ************************************ 00:06:04.779 00:39:57 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:04.779 [2024-04-27 00:39:57.266436] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:06:04.779 [2024-04-27 00:39:57.266549] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2565867 ] 00:06:04.779 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.779 [2024-04-27 00:39:57.385594] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.039 [2024-04-27 00:39:57.476253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.039 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:05.981 ====================================== 00:06:05.981 busy:1908097354 (cyc) 00:06:05.981 total_run_count: 405000 00:06:05.981 tsc_hz: 1900000000 (cyc) 00:06:05.981 ====================================== 00:06:05.981 poller_cost: 4711 (cyc), 2479 (nsec) 00:06:05.981 00:06:05.981 real 0m1.395s 00:06:05.981 user 0m1.260s 00:06:05.981 sys 0m0.128s 00:06:05.981 00:39:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:05.981 00:39:58 -- common/autotest_common.sh@10 -- # set +x 00:06:05.981 ************************************ 00:06:05.981 END TEST thread_poller_perf 00:06:05.981 ************************************ 00:06:05.981 00:39:58 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:05.981 00:39:58 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:06:05.981 00:39:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:05.981 00:39:58 -- common/autotest_common.sh@10 -- # set +x 00:06:06.241 ************************************ 00:06:06.241 START TEST thread_poller_perf 00:06:06.241 ************************************ 00:06:06.241 00:39:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:06.241 [2024-04-27 00:39:58.796278] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:06.241 [2024-04-27 00:39:58.796383] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566180 ] 00:06:06.241 EAL: No free 2048 kB hugepages reported on node 1 00:06:06.241 [2024-04-27 00:39:58.912540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:06.502 [2024-04-27 00:39:59.002229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.502 Running 1000 pollers for 1 seconds with 0 microseconds period. 
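poller_cost in the summary above is simply busy cycles divided by total_run_count, converted to nanoseconds with the reported TSC rate; redoing the first run's numbers reproduces the printed 4711 cyc / 2479 nsec:

awk 'BEGIN {
  busy = 1908097354; runs = 405000; tsc_hz = 1900000000   # figures from the run above
  cyc  = busy / runs                                      # ~4711 cycles per poller call
  printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc / (tsc_hz / 1e9)
}'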
00:06:07.886 ====================================== 00:06:07.886 busy:1901776756 (cyc) 00:06:07.886 total_run_count: 5317000 00:06:07.886 tsc_hz: 1900000000 (cyc) 00:06:07.886 ====================================== 00:06:07.886 poller_cost: 357 (cyc), 187 (nsec) 00:06:07.886 00:06:07.886 real 0m1.411s 00:06:07.886 user 0m1.287s 00:06:07.886 sys 0m0.117s 00:06:07.886 00:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.886 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:07.886 ************************************ 00:06:07.886 END TEST thread_poller_perf 00:06:07.886 ************************************ 00:06:07.886 00:40:00 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:07.886 00:06:07.886 real 0m3.144s 00:06:07.886 user 0m2.648s 00:06:07.886 sys 0m0.472s 00:06:07.886 00:40:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:07.886 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:07.886 ************************************ 00:06:07.886 END TEST thread 00:06:07.886 ************************************ 00:06:07.886 00:40:00 -- spdk/autotest.sh@179 -- # run_test accel /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:06:07.886 00:40:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:07.886 00:40:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:07.886 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:07.886 ************************************ 00:06:07.886 START TEST accel 00:06:07.886 ************************************ 00:06:07.886 00:40:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel.sh 00:06:07.886 * Looking for test storage... 00:06:07.886 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:06:07.886 00:40:00 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:07.886 00:40:00 -- accel/accel.sh@82 -- # get_expected_opcs 00:06:07.886 00:40:00 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:07.886 00:40:00 -- accel/accel.sh@62 -- # spdk_tgt_pid=2566582 00:06:07.886 00:40:00 -- accel/accel.sh@63 -- # waitforlisten 2566582 00:06:07.886 00:40:00 -- common/autotest_common.sh@817 -- # '[' -z 2566582 ']' 00:06:07.886 00:40:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:07.886 00:40:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:06:07.886 00:40:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:07.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
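The accel suite below boots spdk_tgt with a JSON config assembled in build_accel_config and fed through -c /dev/fd/63, enabling the DSA and IAA scan modules. An equivalent standalone config, assuming the standard SPDK subsystem wrapper around the two method entries visible in the trace:

cat > /tmp/accel.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "accel",
      "config": [
        { "method": "dsa_scan_accel_module" },
        { "method": "iaa_scan_accel_module" }
      ]
    }
  ]
}
EOF
build/bin/spdk_tgt -c /tmp/accel.json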
00:06:07.886 00:40:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:06:07.886 00:40:00 -- common/autotest_common.sh@10 -- # set +x 00:06:07.886 00:40:00 -- accel/accel.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:07.886 00:40:00 -- accel/accel.sh@61 -- # build_accel_config 00:06:07.886 00:40:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.886 00:40:00 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:07.886 00:40:00 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:07.886 00:40:00 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:07.886 00:40:00 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:07.886 00:40:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.886 00:40:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.886 00:40:00 -- accel/accel.sh@40 -- # local IFS=, 00:06:07.886 00:40:00 -- accel/accel.sh@41 -- # jq -r . 00:06:07.886 [2024-04-27 00:40:00.516632] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:07.886 [2024-04-27 00:40:00.516773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2566582 ] 00:06:08.147 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.147 [2024-04-27 00:40:00.646039] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.147 [2024-04-27 00:40:00.738763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.147 [2024-04-27 00:40:00.743333] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:08.147 [2024-04-27 00:40:00.751282] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:18.143 00:40:09 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:06:18.143 00:40:09 -- common/autotest_common.sh@850 -- # return 0 00:06:18.143 00:40:09 -- accel/accel.sh@65 -- # [[ 1 -gt 0 ]] 00:06:18.143 00:40:09 -- accel/accel.sh@65 -- # check_save_config dsa_scan_accel_module 00:06:18.143 00:40:09 -- accel/accel.sh@56 -- # rpc_cmd save_config 00:06:18.143 00:40:09 -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:06:18.143 00:40:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.143 00:40:09 -- accel/accel.sh@56 -- # grep dsa_scan_accel_module 00:06:18.143 00:40:09 -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 00:40:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.143 "method": "dsa_scan_accel_module", 00:06:18.143 00:40:09 -- accel/accel.sh@66 -- # [[ 1 -gt 0 ]] 00:06:18.143 00:40:09 -- accel/accel.sh@66 -- # check_save_config iaa_scan_accel_module 00:06:18.143 00:40:09 -- accel/accel.sh@56 -- # rpc_cmd save_config 00:06:18.143 00:40:09 -- accel/accel.sh@56 -- # jq -r '.subsystems[] | select(.subsystem=="accel").config[]' 00:06:18.143 00:40:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.143 00:40:09 -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 00:40:09 -- accel/accel.sh@56 -- # grep iaa_scan_accel_module 00:06:18.143 00:40:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.143 "method": "iaa_scan_accel_module" 00:06:18.143 00:40:09 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:18.143 00:40:09 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:18.143 00:40:09 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py 
accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:18.143 00:40:09 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:18.143 00:40:09 -- accel/accel.sh@70 -- # jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]' 00:06:18.143 00:40:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:06:18.143 00:40:09 -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 00:40:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=iaa 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.143 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.143 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.143 00:40:09 -- 
accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.143 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.144 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:06:18.144 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.144 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.144 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.144 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:18.144 00:40:09 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:18.144 00:40:09 -- accel/accel.sh@72 -- # IFS== 00:06:18.144 00:40:09 -- accel/accel.sh@72 -- # read -r opc module 00:06:18.144 00:40:09 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=dsa 00:06:18.144 00:40:09 -- accel/accel.sh@75 -- # killprocess 2566582 00:06:18.144 00:40:09 -- common/autotest_common.sh@936 -- # '[' -z 2566582 ']' 00:06:18.144 00:40:09 -- common/autotest_common.sh@940 -- # kill -0 2566582 00:06:18.144 00:40:09 -- common/autotest_common.sh@941 -- # uname 00:06:18.144 00:40:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:18.144 00:40:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2566582 00:06:18.144 00:40:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:18.144 00:40:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:18.144 00:40:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2566582' 00:06:18.144 killing process with pid 2566582 00:06:18.144 00:40:10 -- common/autotest_common.sh@955 -- # kill 2566582 00:06:18.144 00:40:10 -- common/autotest_common.sh@960 -- # wait 2566582 00:06:21.484 00:40:13 -- accel/accel.sh@76 -- # trap - ERR 00:06:21.484 00:40:13 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:21.484 00:40:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:06:21.484 00:40:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.484 00:40:13 -- common/autotest_common.sh@10 -- # set +x 00:06:21.484 00:40:13 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:06:21.484 00:40:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:21.484 00:40:13 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.484 00:40:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.484 00:40:13 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:21.484 00:40:13 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:21.484 00:40:13 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:21.484 00:40:13 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:21.484 00:40:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.484 00:40:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.484 00:40:13 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.484 00:40:13 -- accel/accel.sh@41 -- # jq -r . 
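The IFS== / read -r opc module loop traced above fills the expected_opcs map from the accel_get_opc_assignments RPC. A condensed sketch of the same logic (the jq filter is copied from the trace):

declare -A expected_opcs
while IFS== read -r opc module; do
    expected_opcs["$opc"]=$module    # e.g. expected_opcs[crc32c]=dsa
done < <(scripts/rpc.py accel_get_opc_assignments \
    | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]')

With DSA and IAA enabled, most opcodes map to dsa, a few to iaa, and the rest fall back to software, exactly as read out above.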
00:06:21.484 00:40:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:21.484 00:40:13 -- common/autotest_common.sh@10 -- # set +x 00:06:21.484 00:40:13 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:21.484 00:40:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:21.484 00:40:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:21.484 00:40:13 -- common/autotest_common.sh@10 -- # set +x 00:06:21.484 ************************************ 00:06:21.484 START TEST accel_missing_filename 00:06:21.484 ************************************ 00:06:21.484 00:40:14 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:06:21.484 00:40:14 -- common/autotest_common.sh@638 -- # local es=0 00:06:21.484 00:40:14 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:21.484 00:40:14 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:21.484 00:40:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:21.484 00:40:14 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:21.484 00:40:14 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:21.484 00:40:14 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:06:21.484 00:40:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:21.484 00:40:14 -- accel/accel.sh@12 -- # build_accel_config 00:06:21.484 00:40:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.484 00:40:14 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:21.484 00:40:14 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:21.484 00:40:14 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:21.484 00:40:14 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:21.484 00:40:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.484 00:40:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.484 00:40:14 -- accel/accel.sh@40 -- # local IFS=, 00:06:21.484 00:40:14 -- accel/accel.sh@41 -- # jq -r . 00:06:21.484 [2024-04-27 00:40:14.069659] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:21.484 [2024-04-27 00:40:14.069790] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2569339 ] 00:06:21.484 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.744 [2024-04-27 00:40:14.202452] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.744 [2024-04-27 00:40:14.292140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.744 [2024-04-27 00:40:14.296684] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:21.744 [2024-04-27 00:40:14.304643] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:28.328 [2024-04-27 00:40:20.677908] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:30.240 [2024-04-27 00:40:22.538410] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:30.240 A filename is required. 
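This failure is the intended outcome: compress was requested without -l, and the test wraps accel_perf in NOT, which inverts the exit status. A sketch of that pattern (simplified from the valid_exec_arg/es bookkeeping traced below):

NOT() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> test fails
    fi
    return 0        # command failed, as required
}
NOT accel_perf -t 1 -w compress    # passes only because accel_perf errors out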
00:06:30.240 00:40:22 -- common/autotest_common.sh@641 -- # es=234 00:06:30.240 00:40:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:30.240 00:40:22 -- common/autotest_common.sh@650 -- # es=106 00:06:30.240 00:40:22 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:30.240 00:40:22 -- common/autotest_common.sh@658 -- # es=1 00:06:30.240 00:40:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:30.240 00:06:30.240 real 0m8.681s 00:06:30.240 user 0m2.299s 00:06:30.240 sys 0m0.256s 00:06:30.240 00:40:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:30.240 00:40:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.240 ************************************ 00:06:30.240 END TEST accel_missing_filename 00:06:30.240 ************************************ 00:06:30.240 00:40:22 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:30.240 00:40:22 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:06:30.240 00:40:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:30.240 00:40:22 -- common/autotest_common.sh@10 -- # set +x 00:06:30.240 ************************************ 00:06:30.240 START TEST accel_compress_verify 00:06:30.240 ************************************ 00:06:30.240 00:40:22 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:30.240 00:40:22 -- common/autotest_common.sh@638 -- # local es=0 00:06:30.240 00:40:22 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:30.240 00:40:22 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:30.240 00:40:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.240 00:40:22 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:30.240 00:40:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:30.240 00:40:22 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:30.240 00:40:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:06:30.240 00:40:22 -- accel/accel.sh@12 -- # build_accel_config 00:06:30.240 00:40:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:30.240 00:40:22 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:30.240 00:40:22 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:30.240 00:40:22 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:30.240 00:40:22 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:30.240 00:40:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:30.240 00:40:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:30.240 00:40:22 -- accel/accel.sh@40 -- # local IFS=, 00:06:30.240 00:40:22 -- accel/accel.sh@41 -- # jq -r . 00:06:30.240 [2024-04-27 00:40:22.903472] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:06:30.240 [2024-04-27 00:40:22.903604] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2570920 ] 00:06:30.500 EAL: No free 2048 kB hugepages reported on node 1 00:06:30.500 [2024-04-27 00:40:23.038210] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.500 [2024-04-27 00:40:23.129804] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:30.500 [2024-04-27 00:40:23.134365] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:30.501 [2024-04-27 00:40:23.142317] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:37.078 [2024-04-27 00:40:29.541819] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:38.987 [2024-04-27 00:40:31.397380] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:06:38.987 00:06:38.987 Compression does not support the verify option, aborting. 00:06:38.987 00:40:31 -- common/autotest_common.sh@641 -- # es=161 00:06:38.987 00:40:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:06:38.987 00:40:31 -- common/autotest_common.sh@650 -- # es=33 00:06:38.987 00:40:31 -- common/autotest_common.sh@651 -- # case "$es" in 00:06:38.987 00:40:31 -- common/autotest_common.sh@658 -- # es=1 00:06:38.987 00:40:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:06:38.987 00:06:38.987 real 0m8.707s 00:06:38.987 user 0m2.313s 00:06:38.987 sys 0m0.258s 00:06:38.987 00:40:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:38.987 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:38.987 ************************************ 00:06:38.987 END TEST accel_compress_verify 00:06:38.987 ************************************ 00:06:38.987 00:40:31 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:38.987 00:40:31 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:38.987 00:40:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.987 00:40:31 -- common/autotest_common.sh@10 -- # set +x 00:06:39.245 ************************************ 00:06:39.245 START TEST accel_wrong_workload 00:06:39.245 ************************************ 00:06:39.245 00:40:31 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:06:39.245 00:40:31 -- common/autotest_common.sh@638 -- # local es=0 00:06:39.245 00:40:31 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:39.245 00:40:31 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:06:39.245 00:40:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:39.246 00:40:31 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:06:39.246 00:40:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:06:39.246 00:40:31 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:06:39.246 00:40:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:39.246 00:40:31 -- accel/accel.sh@12 -- # build_accel_config 00:06:39.246 00:40:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.246 00:40:31 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:39.246 00:40:31 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:39.246 00:40:31 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 
00:06:39.246 00:40:31 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
00:06:39.246 00:40:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:39.246 00:40:31 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:39.246 00:40:31 -- accel/accel.sh@40 -- # local IFS=,
00:06:39.246 00:40:31 -- accel/accel.sh@41 -- # jq -r .
00:06:39.246 Unsupported workload type: foobar
00:06:39.246 [2024-04-27 00:40:31.722839] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1
00:06:39.246 accel_perf options:
00:06:39.246 [-h help message]
00:06:39.246 [-q queue depth per core]
00:06:39.246 [-C for supported workloads, use this value to configure the io vector size to test (default 1)]
00:06:39.246 [-T number of threads per core]
00:06:39.246 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:06:39.246 [-t time in seconds]
00:06:39.246 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:06:39.246 [ dif_verify, dif_generate, dif_generate_copy]
00:06:39.246 [-M assign module to the operation, not compatible with accel_assign_opc RPC]
00:06:39.246 [-l for compress/decompress workloads, name of uncompressed input file]
00:06:39.246 [-S for crc32c workload, use this seed value (default 0)]
00:06:39.246 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)]
00:06:39.246 [-f for fill workload, use this BYTE value (default 255)]
00:06:39.246 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:06:39.246 [-y verify result if this switch is on]
00:06:39.246 [-a tasks to allocate per core (default: same value as -q)]
00:06:39.246 Can be used to spread operations across a wider range of memory.
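For contrast with the rejected foobar workload, a valid invocation per the usage text above would name a supported workload, e.g. (illustrative):

/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y

which is exactly the shape of the accel_crc32c test run further below.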
00:06:39.246 00:40:31 -- common/autotest_common.sh@641 -- # es=1
00:06:39.246 00:40:31 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:06:39.246 00:40:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:06:39.246 00:40:31 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:06:39.246
00:06:39.246 real 0m0.052s
00:06:39.246 user 0m0.058s
00:06:39.246 sys 0m0.026s
00:06:39.246 00:40:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:39.246 00:40:31 -- common/autotest_common.sh@10 -- # set +x
00:06:39.246 ************************************
00:06:39.246 END TEST accel_wrong_workload
00:06:39.246 ************************************
00:06:39.246 00:40:31 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1
00:06:39.246 00:40:31 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']'
00:06:39.246 00:40:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:39.246 00:40:31 -- common/autotest_common.sh@10 -- # set +x
00:06:39.246 ************************************
00:06:39.246 START TEST accel_negative_buffers
00:06:39.246 ************************************
00:06:39.246 00:40:31 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1
00:06:39.246 00:40:31 -- common/autotest_common.sh@638 -- # local es=0
00:06:39.246 00:40:31 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1
00:06:39.246 00:40:31 -- common/autotest_common.sh@626 -- # local arg=accel_perf
00:06:39.246 00:40:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:39.246 00:40:31 -- common/autotest_common.sh@630 -- # type -t accel_perf
00:06:39.246 00:40:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:06:39.246 00:40:31 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1
00:06:39.246 00:40:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1
00:06:39.246 00:40:31 -- accel/accel.sh@12 -- # build_accel_config
00:06:39.246 00:40:31 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:39.246 00:40:31 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]]
00:06:39.246 00:40:31 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
00:06:39.246 00:40:31 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]]
00:06:39.246 00:40:31 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
00:06:39.246 00:40:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:39.246 00:40:31 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:39.246 00:40:31 -- accel/accel.sh@40 -- # local IFS=,
00:06:39.246 00:40:31 -- accel/accel.sh@41 -- # jq -r .
00:06:39.246 -x option must be non-negative.
00:06:39.246 [2024-04-27 00:40:31.880169] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1
00:06:39.246 accel_perf options:
00:06:39.246 [-h help message]
00:06:39.246 [-q queue depth per core]
00:06:39.246 [-C for supported workloads, use this value to configure the io vector size to test (default 1)]
00:06:39.246 [-T number of threads per core]
00:06:39.246 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)]
00:06:39.246 [-t time in seconds]
00:06:39.246 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor,
00:06:39.246 [ dif_verify, dif_generate, dif_generate_copy]
00:06:39.246 [-M assign module to the operation, not compatible with accel_assign_opc RPC]
00:06:39.246 [-l for compress/decompress workloads, name of uncompressed input file]
00:06:39.246 [-S for crc32c workload, use this seed value (default 0)]
00:06:39.246 [-P for compare workload, percentage of operations that should miscompare (percent, default 0)]
00:06:39.246 [-f for fill workload, use this BYTE value (default 255)]
00:06:39.246 [-x for xor workload, use this number of source buffers (default, minimum: 2)]
00:06:39.246 [-y verify result if this switch is on]
00:06:39.246 [-a tasks to allocate per core (default: same value as -q)]
00:06:39.246 Can be used to spread operations across a wider range of memory.
00:06:39.246 00:40:31 -- common/autotest_common.sh@641 -- # es=1
00:06:39.246 00:40:31 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:06:39.246 00:40:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:06:39.246 00:40:31 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:06:39.246
00:06:39.246 real 0m0.053s
00:06:39.246 user 0m0.055s
00:06:39.246 sys 0m0.029s
00:06:39.246 00:40:31 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:06:39.246 00:40:31 -- common/autotest_common.sh@10 -- # set +x
00:06:39.246 ************************************
00:06:39.246 END TEST accel_negative_buffers
00:06:39.246 ************************************
00:06:39.246 00:40:31 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y
00:06:39.246 00:40:31 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']'
00:06:39.246 00:40:31 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:06:39.246 00:40:31 -- common/autotest_common.sh@10 -- # set +x
00:06:39.507 ************************************
00:06:39.507 START TEST accel_crc32c
00:06:39.507 ************************************
00:06:39.507 00:40:32 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y
00:06:39.507 00:40:32 -- accel/accel.sh@16 -- # local accel_opc
00:06:39.507 00:40:32 -- accel/accel.sh@17 -- # local accel_module
00:06:39.507 00:40:32 -- accel/accel.sh@19 -- # IFS=:
00:06:39.507 00:40:32 -- accel/accel.sh@19 -- # read -r var val
00:06:39.507 00:40:32 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y
00:06:39.507 00:40:32 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y
00:06:39.507 00:40:32 -- accel/accel.sh@12 -- # build_accel_config
00:06:39.507 00:40:32 -- accel/accel.sh@31 -- # accel_json_cfg=()
00:06:39.507 00:40:32 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]]
00:06:39.507 00:40:32 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}')
00:06:39.507 00:40:32 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]]
00:06:39.507 00:40:32 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}')
00:06:39.507 00:40:32 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:06:39.507 00:40:32 -- accel/accel.sh@36 -- # [[ -n '' ]]
00:06:39.507 00:40:32 -- accel/accel.sh@40 -- # local IFS=,
00:06:39.507 00:40:32 -- accel/accel.sh@41 -- # jq -r .
00:06:39.507 [2024-04-27 00:40:32.039557] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization...
00:06:39.507 [2024-04-27 00:40:32.039657] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572782 ] 00:06:39.507 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.507 [2024-04-27 00:40:32.157294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.768 [2024-04-27 00:40:32.249862] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.768 [2024-04-27 00:40:32.254368] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:39.768 [2024-04-27 00:40:32.262331] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val= 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val= 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val=0x1 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val= 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val= 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val=crc32c 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val=32 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val= 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val=dsa 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val=32 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- 
accel/accel.sh@20 -- # val=32 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val=1 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val=Yes 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val= 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:46.354 00:40:38 -- accel/accel.sh@20 -- # val= 00:06:46.354 00:40:38 -- accel/accel.sh@21 -- # case "$var" in 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # IFS=: 00:06:46.354 00:40:38 -- accel/accel.sh@19 -- # read -r var val 00:06:49.651 00:40:41 -- accel/accel.sh@20 -- # val= 00:06:49.651 00:40:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # IFS=: 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # read -r var val 00:06:49.651 00:40:41 -- accel/accel.sh@20 -- # val= 00:06:49.651 00:40:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # IFS=: 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # read -r var val 00:06:49.651 00:40:41 -- accel/accel.sh@20 -- # val= 00:06:49.651 00:40:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # IFS=: 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # read -r var val 00:06:49.651 00:40:41 -- accel/accel.sh@20 -- # val= 00:06:49.651 00:40:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # IFS=: 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # read -r var val 00:06:49.651 00:40:41 -- accel/accel.sh@20 -- # val= 00:06:49.651 00:40:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # IFS=: 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # read -r var val 00:06:49.651 00:40:41 -- accel/accel.sh@20 -- # val= 00:06:49.651 00:40:41 -- accel/accel.sh@21 -- # case "$var" in 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # IFS=: 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # read -r var val 00:06:49.651 00:40:41 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:49.651 00:40:41 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:49.651 00:40:41 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:49.651 00:06:49.651 real 0m9.659s 00:06:49.651 user 0m3.261s 00:06:49.651 sys 0m0.234s 00:06:49.651 00:40:41 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:49.651 00:40:41 -- common/autotest_common.sh@10 -- # set +x 00:06:49.651 ************************************ 00:06:49.651 END TEST accel_crc32c 00:06:49.651 ************************************ 00:06:49.651 00:40:41 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:49.651 00:40:41 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 
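Each accel_test run ends with the bracketed checks seen above. With variable names taken from the accel.sh trace, they assert (sketch):

[[ -n $accel_module ]]                                  # a module was reported (dsa)
[[ -n $accel_opc ]]                                     # an opcode was parsed (crc32c)
[[ $accel_module == "${expected_opcs[$accel_opc]}" ]]   # crc32c really ran on dsa

i.e. the operation must have executed on the module that accel_get_opc_assignments promised for it.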
00:06:49.651 00:40:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:49.651 00:40:41 -- common/autotest_common.sh@10 -- # set +x 00:06:49.651 ************************************ 00:06:49.651 START TEST accel_crc32c_C2 00:06:49.651 ************************************ 00:06:49.651 00:40:41 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:49.651 00:40:41 -- accel/accel.sh@16 -- # local accel_opc 00:06:49.651 00:40:41 -- accel/accel.sh@17 -- # local accel_module 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # IFS=: 00:06:49.651 00:40:41 -- accel/accel.sh@19 -- # read -r var val 00:06:49.651 00:40:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:49.651 00:40:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:49.651 00:40:41 -- accel/accel.sh@12 -- # build_accel_config 00:06:49.651 00:40:41 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:49.651 00:40:41 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:49.651 00:40:41 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:49.651 00:40:41 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:49.651 00:40:41 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:49.651 00:40:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:49.651 00:40:41 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:49.651 00:40:41 -- accel/accel.sh@40 -- # local IFS=, 00:06:49.651 00:40:41 -- accel/accel.sh@41 -- # jq -r . 00:06:49.651 [2024-04-27 00:40:41.807651] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:06:49.651 [2024-04-27 00:40:41.807756] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2574843 ] 00:06:49.651 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.651 [2024-04-27 00:40:41.912314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.651 [2024-04-27 00:40:42.001334] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.651 [2024-04-27 00:40:42.005782] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:49.651 [2024-04-27 00:40:42.013751] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val= 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val= 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val=0x1 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val= 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val= 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 
-- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val=crc32c 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val=0 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val= 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val=dsa 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@22 -- # accel_module=dsa 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val=32 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val=32 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val=1 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val=Yes 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val= 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:56.258 00:40:48 -- accel/accel.sh@20 -- # val= 00:06:56.258 00:40:48 -- accel/accel.sh@21 -- # case "$var" in 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # IFS=: 00:06:56.258 00:40:48 -- accel/accel.sh@19 -- # read -r var val 00:06:58.794 00:40:51 -- accel/accel.sh@20 -- # val= 00:06:58.794 00:40:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # IFS=: 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # read -r var val 00:06:58.794 00:40:51 -- accel/accel.sh@20 -- # val= 00:06:58.794 00:40:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # IFS=: 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # read -r var val 
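Every accel_test variant re-runs build_accel_config and hands the result to accel_perf on /dev/fd/62. A reconstruction of that helper from the traced steps (the exact here-doc shape is assumed; the accel_json_cfg entries and the IFS=, and jq -r . joining are verbatim from the trace):

build_accel_config_sketch() {
    # join the accumulated accel_json_cfg entries (dsa/iaa scan methods here)
    # with "," and pretty-print the resulting accel-subsystem config
    local IFS=","
    jq -r . <<- JSON
	{"subsystems": [{"subsystem": "accel", "config": [${accel_json_cfg[*]}]}]}
	JSON
}

This is what makes the DSA and IAA modules available before each workload runs.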
00:06:58.794 00:40:51 -- accel/accel.sh@20 -- # val= 00:06:58.794 00:40:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # IFS=: 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # read -r var val 00:06:58.794 00:40:51 -- accel/accel.sh@20 -- # val= 00:06:58.794 00:40:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # IFS=: 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # read -r var val 00:06:58.794 00:40:51 -- accel/accel.sh@20 -- # val= 00:06:58.794 00:40:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.794 00:40:51 -- accel/accel.sh@19 -- # IFS=: 00:06:58.795 00:40:51 -- accel/accel.sh@19 -- # read -r var val 00:06:58.795 00:40:51 -- accel/accel.sh@20 -- # val= 00:06:58.795 00:40:51 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.795 00:40:51 -- accel/accel.sh@19 -- # IFS=: 00:06:58.795 00:40:51 -- accel/accel.sh@19 -- # read -r var val 00:06:58.795 00:40:51 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:06:58.795 00:40:51 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:58.795 00:40:51 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:06:58.795 00:06:58.795 real 0m9.655s 00:06:58.795 user 0m3.263s 00:06:58.795 sys 0m0.226s 00:06:58.795 00:40:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:06:58.795 00:40:51 -- common/autotest_common.sh@10 -- # set +x 00:06:58.795 ************************************ 00:06:58.795 END TEST accel_crc32c_C2 00:06:58.795 ************************************ 00:06:58.795 00:40:51 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:58.795 00:40:51 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:06:58.795 00:40:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:58.795 00:40:51 -- common/autotest_common.sh@10 -- # set +x 00:06:59.054 ************************************ 00:06:59.054 START TEST accel_copy 00:06:59.054 ************************************ 00:06:59.054 00:40:51 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:06:59.054 00:40:51 -- accel/accel.sh@16 -- # local accel_opc 00:06:59.054 00:40:51 -- accel/accel.sh@17 -- # local accel_module 00:06:59.054 00:40:51 -- accel/accel.sh@19 -- # IFS=: 00:06:59.054 00:40:51 -- accel/accel.sh@19 -- # read -r var val 00:06:59.054 00:40:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:59.054 00:40:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:59.054 00:40:51 -- accel/accel.sh@12 -- # build_accel_config 00:06:59.054 00:40:51 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.054 00:40:51 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:06:59.054 00:40:51 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:06:59.054 00:40:51 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:06:59.054 00:40:51 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:06:59.054 00:40:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.054 00:40:51 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.054 00:40:51 -- accel/accel.sh@40 -- # local IFS=, 00:06:59.054 00:40:51 -- accel/accel.sh@41 -- # jq -r . 00:06:59.054 [2024-04-27 00:40:51.581243] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:06:59.054 [2024-04-27 00:40:51.581345] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2576675 ] 00:06:59.054 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.054 [2024-04-27 00:40:51.696426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.312 [2024-04-27 00:40:51.787844] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.312 [2024-04-27 00:40:51.792349] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:06:59.312 [2024-04-27 00:40:51.800312] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val= 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val= 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val=0x1 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val= 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val= 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val=copy 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@23 -- # accel_opc=copy 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val= 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val=dsa 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val=32 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- accel/accel.sh@20 -- # val=32 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.885 00:40:58 -- 
accel/accel.sh@20 -- # val=1 00:07:05.885 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.885 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.886 00:40:58 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.886 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.886 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.886 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.886 00:40:58 -- accel/accel.sh@20 -- # val=Yes 00:07:05.886 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.886 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.886 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.886 00:40:58 -- accel/accel.sh@20 -- # val= 00:07:05.886 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.886 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.886 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:05.886 00:40:58 -- accel/accel.sh@20 -- # val= 00:07:05.886 00:40:58 -- accel/accel.sh@21 -- # case "$var" in 00:07:05.886 00:40:58 -- accel/accel.sh@19 -- # IFS=: 00:07:05.886 00:40:58 -- accel/accel.sh@19 -- # read -r var val 00:07:09.174 00:41:01 -- accel/accel.sh@20 -- # val= 00:07:09.174 00:41:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # IFS=: 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # read -r var val 00:07:09.174 00:41:01 -- accel/accel.sh@20 -- # val= 00:07:09.174 00:41:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # IFS=: 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # read -r var val 00:07:09.174 00:41:01 -- accel/accel.sh@20 -- # val= 00:07:09.174 00:41:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # IFS=: 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # read -r var val 00:07:09.174 00:41:01 -- accel/accel.sh@20 -- # val= 00:07:09.174 00:41:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # IFS=: 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # read -r var val 00:07:09.174 00:41:01 -- accel/accel.sh@20 -- # val= 00:07:09.174 00:41:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # IFS=: 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # read -r var val 00:07:09.174 00:41:01 -- accel/accel.sh@20 -- # val= 00:07:09.174 00:41:01 -- accel/accel.sh@21 -- # case "$var" in 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # IFS=: 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # read -r var val 00:07:09.174 00:41:01 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:09.174 00:41:01 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:07:09.174 00:41:01 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:09.174 00:07:09.174 real 0m9.717s 00:07:09.174 user 0m3.318s 00:07:09.174 sys 0m0.231s 00:07:09.174 00:41:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:09.174 00:41:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.174 ************************************ 00:07:09.174 END TEST accel_copy 00:07:09.174 ************************************ 00:07:09.174 00:41:01 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.174 00:41:01 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:09.174 00:41:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:09.174 00:41:01 -- common/autotest_common.sh@10 -- # set +x 00:07:09.174 ************************************ 00:07:09.174 START TEST accel_fill 
00:07:09.174 ************************************ 00:07:09.174 00:41:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.174 00:41:01 -- accel/accel.sh@16 -- # local accel_opc 00:07:09.174 00:41:01 -- accel/accel.sh@17 -- # local accel_module 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # IFS=: 00:07:09.174 00:41:01 -- accel/accel.sh@19 -- # read -r var val 00:07:09.174 00:41:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.174 00:41:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:09.174 00:41:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:09.174 00:41:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.174 00:41:01 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:09.174 00:41:01 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:09.174 00:41:01 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:09.174 00:41:01 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:09.174 00:41:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.174 00:41:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.174 00:41:01 -- accel/accel.sh@40 -- # local IFS=, 00:07:09.174 00:41:01 -- accel/accel.sh@41 -- # jq -r . 00:07:09.174 [2024-04-27 00:41:01.409851] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:07:09.174 [2024-04-27 00:41:01.409949] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2578780 ] 00:07:09.174 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.174 [2024-04-27 00:41:01.521586] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.174 [2024-04-27 00:41:01.611375] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.174 [2024-04-27 00:41:01.615856] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:09.174 [2024-04-27 00:41:01.623824] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val= 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val= 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val=0x1 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val= 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val= 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val=fill 00:07:15.768 00:41:08 -- accel/accel.sh@21 
-- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@23 -- # accel_opc=fill 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val=0x80 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val= 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val=dsa 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val=64 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val=64 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val=1 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val=Yes 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val= 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:15.768 00:41:08 -- accel/accel.sh@20 -- # val= 00:07:15.768 00:41:08 -- accel/accel.sh@21 -- # case "$var" in 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # IFS=: 00:07:15.768 00:41:08 -- accel/accel.sh@19 -- # read -r var val 00:07:19.061 00:41:11 -- accel/accel.sh@20 -- # val= 00:07:19.061 00:41:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # IFS=: 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # read -r var val 00:07:19.061 00:41:11 -- accel/accel.sh@20 -- # val= 00:07:19.061 00:41:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # IFS=: 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # read -r var val 00:07:19.061 00:41:11 -- accel/accel.sh@20 -- # val= 00:07:19.061 00:41:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # IFS=: 00:07:19.061 00:41:11 -- 
accel/accel.sh@19 -- # read -r var val 00:07:19.061 00:41:11 -- accel/accel.sh@20 -- # val= 00:07:19.061 00:41:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # IFS=: 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # read -r var val 00:07:19.061 00:41:11 -- accel/accel.sh@20 -- # val= 00:07:19.061 00:41:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # IFS=: 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # read -r var val 00:07:19.061 00:41:11 -- accel/accel.sh@20 -- # val= 00:07:19.061 00:41:11 -- accel/accel.sh@21 -- # case "$var" in 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # IFS=: 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # read -r var val 00:07:19.061 00:41:11 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:19.061 00:41:11 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:07:19.061 00:41:11 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:19.061 00:07:19.061 real 0m9.650s 00:07:19.061 user 0m3.247s 00:07:19.061 sys 0m0.241s 00:07:19.061 00:41:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:19.061 00:41:11 -- common/autotest_common.sh@10 -- # set +x 00:07:19.061 ************************************ 00:07:19.061 END TEST accel_fill 00:07:19.061 ************************************ 00:07:19.061 00:41:11 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:19.061 00:41:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:19.061 00:41:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.061 00:41:11 -- common/autotest_common.sh@10 -- # set +x 00:07:19.061 ************************************ 00:07:19.061 START TEST accel_copy_crc32c 00:07:19.061 ************************************ 00:07:19.061 00:41:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:07:19.061 00:41:11 -- accel/accel.sh@16 -- # local accel_opc 00:07:19.061 00:41:11 -- accel/accel.sh@17 -- # local accel_module 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # IFS=: 00:07:19.061 00:41:11 -- accel/accel.sh@19 -- # read -r var val 00:07:19.061 00:41:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:19.061 00:41:11 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:19.061 00:41:11 -- accel/accel.sh@12 -- # build_accel_config 00:07:19.061 00:41:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:19.061 00:41:11 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:19.061 00:41:11 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:19.061 00:41:11 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:19.061 00:41:11 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:19.061 00:41:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:19.061 00:41:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:19.061 00:41:11 -- accel/accel.sh@40 -- # local IFS=, 00:07:19.061 00:41:11 -- accel/accel.sh@41 -- # jq -r . 00:07:19.061 [2024-04-27 00:41:11.171425] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
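For reference, the -c /dev/fd/62 argument above hands accel_perf the JSON that build_accel_config assembles from the two *_scan_accel_module entries echoed in the xtrace. A minimal standalone reproduction, assuming SPDK's usual "subsystems" envelope for app JSON config (the envelope itself is not shown in this log):

    # Rebuild the config that build_accel_config pipes in over /dev/fd/62;
    # the two method entries are taken verbatim from the xtrace above.
    cat > /tmp/accel.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "accel",
          "config": [
            {"method": "dsa_scan_accel_module"},
            {"method": "iaa_scan_accel_module"}
          ]
        }
      ]
    }
    EOF

    # Same flags the harness logged for this pass: 1-second run,
    # copy_crc32c workload, -y to verify results.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c /tmp/accel.json -t 1 -w copy_crc32c -y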
00:07:19.061 [2024-04-27 00:41:11.171527] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2580599 ] 00:07:19.061 EAL: No free 2048 kB hugepages reported on node 1 00:07:19.061 [2024-04-27 00:41:11.262635] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:19.061 [2024-04-27 00:41:11.353192] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:19.061 [2024-04-27 00:41:11.357710] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:19.061 [2024-04-27 00:41:11.365674] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val= 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val= 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val=0x1 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val= 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val= 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val=0 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val= 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val=dsa 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 
00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val=32 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val=32 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val=1 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.657 00:41:17 -- accel/accel.sh@20 -- # val=Yes 00:07:25.657 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.657 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.658 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.658 00:41:17 -- accel/accel.sh@20 -- # val= 00:07:25.658 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.658 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.658 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:25.658 00:41:17 -- accel/accel.sh@20 -- # val= 00:07:25.658 00:41:17 -- accel/accel.sh@21 -- # case "$var" in 00:07:25.658 00:41:17 -- accel/accel.sh@19 -- # IFS=: 00:07:25.658 00:41:17 -- accel/accel.sh@19 -- # read -r var val 00:07:28.201 00:41:20 -- accel/accel.sh@20 -- # val= 00:07:28.201 00:41:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # IFS=: 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # read -r var val 00:07:28.201 00:41:20 -- accel/accel.sh@20 -- # val= 00:07:28.201 00:41:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # IFS=: 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # read -r var val 00:07:28.201 00:41:20 -- accel/accel.sh@20 -- # val= 00:07:28.201 00:41:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # IFS=: 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # read -r var val 00:07:28.201 00:41:20 -- accel/accel.sh@20 -- # val= 00:07:28.201 00:41:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # IFS=: 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # read -r var val 00:07:28.201 00:41:20 -- accel/accel.sh@20 -- # val= 00:07:28.201 00:41:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # IFS=: 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # read -r var val 00:07:28.201 00:41:20 -- accel/accel.sh@20 -- # val= 00:07:28.201 00:41:20 -- accel/accel.sh@21 -- # case "$var" in 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # IFS=: 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # read -r var val 00:07:28.201 00:41:20 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:28.201 00:41:20 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:28.201 00:41:20 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:28.201 00:07:28.201 real 0m9.644s 00:07:28.201 user 0m3.258s 00:07:28.201 sys 0m0.225s 00:07:28.201 00:41:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:28.201 00:41:20 -- common/autotest_common.sh@10 -- # set +x 00:07:28.201 ************************************ 
00:07:28.201 END TEST accel_copy_crc32c 00:07:28.201 ************************************ 00:07:28.201 00:41:20 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:28.201 00:41:20 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:28.201 00:41:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.201 00:41:20 -- common/autotest_common.sh@10 -- # set +x 00:07:28.201 ************************************ 00:07:28.201 START TEST accel_copy_crc32c_C2 00:07:28.201 ************************************ 00:07:28.201 00:41:20 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:28.201 00:41:20 -- accel/accel.sh@16 -- # local accel_opc 00:07:28.201 00:41:20 -- accel/accel.sh@17 -- # local accel_module 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # IFS=: 00:07:28.201 00:41:20 -- accel/accel.sh@19 -- # read -r var val 00:07:28.501 00:41:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:28.501 00:41:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:28.501 00:41:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:28.501 00:41:20 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:28.501 00:41:20 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:28.501 00:41:20 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:28.501 00:41:20 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:28.501 00:41:20 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:28.501 00:41:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:28.501 00:41:20 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:28.501 00:41:20 -- accel/accel.sh@40 -- # local IFS=, 00:07:28.501 00:41:20 -- accel/accel.sh@41 -- # jq -r . 00:07:28.501 [2024-04-27 00:41:20.938981] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
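The only change from the previous pass is the -C 2 argument. The config dump that follows echoes a 4096-byte transfer against an 8192-byte source, which reads as the crc32c source being presented in two chained 4096-byte segments; that interpretation of -C is inferred from the sizes, not stated anywhere in this log. A sketch, reusing the /tmp/accel.json reconstructed above:

    # Chained variant of the same workload (semantics of -C assumed):
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c /tmp/accel.json -t 1 -w copy_crc32c -y -C 2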
00:07:28.501 [2024-04-27 00:41:20.939133] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2582643 ] 00:07:28.501 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.501 [2024-04-27 00:41:21.069076] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.501 [2024-04-27 00:41:21.161811] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.501 [2024-04-27 00:41:21.166368] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:28.501 [2024-04-27 00:41:21.174318] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val= 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val= 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val=0x1 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val= 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val= 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val=copy_crc32c 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val=0 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val='8192 bytes' 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val= 00:07:35.107 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.107 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.107 00:41:27 -- accel/accel.sh@20 -- # val=dsa 00:07:35.108 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.108 00:41:27 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # read -r var val 
00:07:35.108 00:41:27 -- accel/accel.sh@20 -- # val=32 00:07:35.108 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.108 00:41:27 -- accel/accel.sh@20 -- # val=32 00:07:35.108 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.108 00:41:27 -- accel/accel.sh@20 -- # val=1 00:07:35.108 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.108 00:41:27 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:35.108 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.108 00:41:27 -- accel/accel.sh@20 -- # val=Yes 00:07:35.108 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.108 00:41:27 -- accel/accel.sh@20 -- # val= 00:07:35.108 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:35.108 00:41:27 -- accel/accel.sh@20 -- # val= 00:07:35.108 00:41:27 -- accel/accel.sh@21 -- # case "$var" in 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # IFS=: 00:07:35.108 00:41:27 -- accel/accel.sh@19 -- # read -r var val 00:07:38.409 00:41:30 -- accel/accel.sh@20 -- # val= 00:07:38.409 00:41:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # IFS=: 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # read -r var val 00:07:38.409 00:41:30 -- accel/accel.sh@20 -- # val= 00:07:38.409 00:41:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # IFS=: 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # read -r var val 00:07:38.409 00:41:30 -- accel/accel.sh@20 -- # val= 00:07:38.409 00:41:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # IFS=: 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # read -r var val 00:07:38.409 00:41:30 -- accel/accel.sh@20 -- # val= 00:07:38.409 00:41:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # IFS=: 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # read -r var val 00:07:38.409 00:41:30 -- accel/accel.sh@20 -- # val= 00:07:38.409 00:41:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # IFS=: 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # read -r var val 00:07:38.409 00:41:30 -- accel/accel.sh@20 -- # val= 00:07:38.409 00:41:30 -- accel/accel.sh@21 -- # case "$var" in 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # IFS=: 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # read -r var val 00:07:38.409 00:41:30 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:38.409 00:41:30 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:07:38.409 00:41:30 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:38.409 00:07:38.409 real 0m9.693s 00:07:38.409 user 0m3.276s 00:07:38.409 sys 0m0.252s 00:07:38.409 00:41:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:38.409 00:41:30 -- common/autotest_common.sh@10 -- # set +x 00:07:38.409 ************************************ 
00:07:38.409 END TEST accel_copy_crc32c_C2 00:07:38.409 ************************************ 00:07:38.409 00:41:30 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:38.409 00:41:30 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:38.409 00:41:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:38.409 00:41:30 -- common/autotest_common.sh@10 -- # set +x 00:07:38.409 ************************************ 00:07:38.409 START TEST accel_dualcast 00:07:38.409 ************************************ 00:07:38.409 00:41:30 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:07:38.409 00:41:30 -- accel/accel.sh@16 -- # local accel_opc 00:07:38.409 00:41:30 -- accel/accel.sh@17 -- # local accel_module 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # IFS=: 00:07:38.409 00:41:30 -- accel/accel.sh@19 -- # read -r var val 00:07:38.409 00:41:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:38.409 00:41:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:38.409 00:41:30 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.409 00:41:30 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:38.409 00:41:30 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:38.409 00:41:30 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:38.409 00:41:30 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:38.409 00:41:30 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:38.409 00:41:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.409 00:41:30 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:38.409 00:41:30 -- accel/accel.sh@40 -- # local IFS=, 00:07:38.409 00:41:30 -- accel/accel.sh@41 -- # jq -r . 00:07:38.409 [2024-04-27 00:41:30.742574] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
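Dualcast writes a single source buffer to two destinations in one operation. The harness decides which module serviced it by parsing accel_perf's "name: value" dump with IFS=: (the val=dsa lines below); a quick manual check of the same thing, with the grep label assumed rather than taken from this log:

    # Run the dualcast pass and check which accel module claimed it:
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c /tmp/accel.json -t 1 -w dualcast -y | grep -i module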
00:07:38.409 [2024-04-27 00:41:30.742684] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2584523 ] 00:07:38.409 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.409 [2024-04-27 00:41:30.859834] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.409 [2024-04-27 00:41:30.949622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.410 [2024-04-27 00:41:30.954116] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:38.410 [2024-04-27 00:41:30.962084] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val= 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val= 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val=0x1 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val= 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val= 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val=dualcast 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val= 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val=dsa 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val=32 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val=32 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- 
accel/accel.sh@20 -- # val=1 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val=Yes 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val= 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:44.990 00:41:37 -- accel/accel.sh@20 -- # val= 00:07:44.990 00:41:37 -- accel/accel.sh@21 -- # case "$var" in 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # IFS=: 00:07:44.990 00:41:37 -- accel/accel.sh@19 -- # read -r var val 00:07:48.283 00:41:40 -- accel/accel.sh@20 -- # val= 00:07:48.283 00:41:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # IFS=: 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # read -r var val 00:07:48.283 00:41:40 -- accel/accel.sh@20 -- # val= 00:07:48.283 00:41:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # IFS=: 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # read -r var val 00:07:48.283 00:41:40 -- accel/accel.sh@20 -- # val= 00:07:48.283 00:41:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # IFS=: 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # read -r var val 00:07:48.283 00:41:40 -- accel/accel.sh@20 -- # val= 00:07:48.283 00:41:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # IFS=: 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # read -r var val 00:07:48.283 00:41:40 -- accel/accel.sh@20 -- # val= 00:07:48.283 00:41:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # IFS=: 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # read -r var val 00:07:48.283 00:41:40 -- accel/accel.sh@20 -- # val= 00:07:48.283 00:41:40 -- accel/accel.sh@21 -- # case "$var" in 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # IFS=: 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # read -r var val 00:07:48.283 00:41:40 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:48.283 00:41:40 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:07:48.283 00:41:40 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:48.283 00:07:48.283 real 0m9.658s 00:07:48.283 user 0m3.260s 00:07:48.283 sys 0m0.237s 00:07:48.283 00:41:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:48.283 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:07:48.283 ************************************ 00:07:48.283 END TEST accel_dualcast 00:07:48.283 ************************************ 00:07:48.283 00:41:40 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:48.283 00:41:40 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:48.283 00:41:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.283 00:41:40 -- common/autotest_common.sh@10 -- # set +x 00:07:48.283 ************************************ 00:07:48.283 START TEST accel_compare 00:07:48.283 
************************************ 00:07:48.283 00:41:40 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:07:48.283 00:41:40 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.283 00:41:40 -- accel/accel.sh@17 -- # local accel_module 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # IFS=: 00:07:48.283 00:41:40 -- accel/accel.sh@19 -- # read -r var val 00:07:48.283 00:41:40 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:48.283 00:41:40 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:48.283 00:41:40 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.283 00:41:40 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:48.283 00:41:40 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:48.283 00:41:40 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:48.283 00:41:40 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:48.283 00:41:40 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:48.283 00:41:40 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.283 00:41:40 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:48.283 00:41:40 -- accel/accel.sh@40 -- # local IFS=, 00:07:48.283 00:41:40 -- accel/accel.sh@41 -- # jq -r . 00:07:48.283 [2024-04-27 00:41:40.536768] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:07:48.283 [2024-04-27 00:41:40.536854] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2586440 ] 00:07:48.283 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.283 [2024-04-27 00:41:40.633466] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.283 [2024-04-27 00:41:40.729490] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:48.283 [2024-04-27 00:41:40.733996] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:48.283 [2024-04-27 00:41:40.741955] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val= 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val= 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val=0x1 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val= 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val= 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val=compare 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- 
accel/accel.sh@23 -- # accel_opc=compare 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val= 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val=dsa 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@22 -- # accel_module=dsa 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val=32 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val=32 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val=1 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val='1 seconds' 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.861 00:41:47 -- accel/accel.sh@20 -- # val=Yes 00:07:54.861 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.861 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.862 00:41:47 -- accel/accel.sh@20 -- # val= 00:07:54.862 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.862 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.862 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:54.862 00:41:47 -- accel/accel.sh@20 -- # val= 00:07:54.862 00:41:47 -- accel/accel.sh@21 -- # case "$var" in 00:07:54.862 00:41:47 -- accel/accel.sh@19 -- # IFS=: 00:07:54.862 00:41:47 -- accel/accel.sh@19 -- # read -r var val 00:07:58.153 00:41:50 -- accel/accel.sh@20 -- # val= 00:07:58.153 00:41:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # IFS=: 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # read -r var val 00:07:58.153 00:41:50 -- accel/accel.sh@20 -- # val= 00:07:58.153 00:41:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # IFS=: 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # read -r var val 00:07:58.153 00:41:50 -- accel/accel.sh@20 -- # val= 00:07:58.153 00:41:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # IFS=: 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # read -r var val 00:07:58.153 00:41:50 -- accel/accel.sh@20 -- # val= 00:07:58.153 00:41:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # IFS=: 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # read -r var val 00:07:58.153 
00:41:50 -- accel/accel.sh@20 -- # val= 00:07:58.153 00:41:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # IFS=: 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # read -r var val 00:07:58.153 00:41:50 -- accel/accel.sh@20 -- # val= 00:07:58.153 00:41:50 -- accel/accel.sh@21 -- # case "$var" in 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # IFS=: 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # read -r var val 00:07:58.153 00:41:50 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:07:58.153 00:41:50 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:58.153 00:41:50 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:07:58.153 00:07:58.153 real 0m9.662s 00:07:58.153 user 0m3.276s 00:07:58.153 sys 0m0.223s 00:07:58.153 00:41:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:07:58.153 00:41:50 -- common/autotest_common.sh@10 -- # set +x 00:07:58.153 ************************************ 00:07:58.153 END TEST accel_compare 00:07:58.153 ************************************ 00:07:58.153 00:41:50 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:58.153 00:41:50 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:58.153 00:41:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.153 00:41:50 -- common/autotest_common.sh@10 -- # set +x 00:07:58.153 ************************************ 00:07:58.153 START TEST accel_xor 00:07:58.153 ************************************ 00:07:58.153 00:41:50 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:07:58.153 00:41:50 -- accel/accel.sh@16 -- # local accel_opc 00:07:58.153 00:41:50 -- accel/accel.sh@17 -- # local accel_module 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # IFS=: 00:07:58.153 00:41:50 -- accel/accel.sh@19 -- # read -r var val 00:07:58.153 00:41:50 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:58.153 00:41:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:58.153 00:41:50 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.153 00:41:50 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:58.153 00:41:50 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:07:58.153 00:41:50 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:07:58.153 00:41:50 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:07:58.153 00:41:50 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:07:58.153 00:41:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.153 00:41:50 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:58.153 00:41:50 -- accel/accel.sh@40 -- # local IFS=, 00:07:58.153 00:41:50 -- accel/accel.sh@41 -- # jq -r . 00:07:58.153 [2024-04-27 00:41:50.325320] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
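Note that this pass lands on the software module rather than dsa: the dump below echoes val=software, and the end-of-test check accepts it (software == software). The val=2 is the default two xor source buffers, since no -x flag was passed. Standalone form:

    # xor pass as invoked above; no DSA offload is claimed for it here.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c /tmp/accel.json -t 1 -w xor -y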
00:07:58.154 [2024-04-27 00:41:50.325453] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2588457 ] 00:07:58.154 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.154 [2024-04-27 00:41:50.418971] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.154 [2024-04-27 00:41:50.507882] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:58.154 [2024-04-27 00:41:50.512458] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:07:58.154 [2024-04-27 00:41:50.520425] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val= 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val= 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val=0x1 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val= 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val= 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val=xor 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val=2 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val= 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val=software 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@22 -- # accel_module=software 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val=32 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- 
accel/accel.sh@20 -- # val=32 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val=1 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val=Yes 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val= 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:04.729 00:41:56 -- accel/accel.sh@20 -- # val= 00:08:04.729 00:41:56 -- accel/accel.sh@21 -- # case "$var" in 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # IFS=: 00:08:04.729 00:41:56 -- accel/accel.sh@19 -- # read -r var val 00:08:07.363 00:41:59 -- accel/accel.sh@20 -- # val= 00:08:07.363 00:41:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.363 00:41:59 -- accel/accel.sh@19 -- # IFS=: 00:08:07.363 00:41:59 -- accel/accel.sh@19 -- # read -r var val 00:08:07.363 00:41:59 -- accel/accel.sh@20 -- # val= 00:08:07.363 00:41:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.363 00:41:59 -- accel/accel.sh@19 -- # IFS=: 00:08:07.363 00:41:59 -- accel/accel.sh@19 -- # read -r var val 00:08:07.363 00:41:59 -- accel/accel.sh@20 -- # val= 00:08:07.363 00:41:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.363 00:41:59 -- accel/accel.sh@19 -- # IFS=: 00:08:07.363 00:41:59 -- accel/accel.sh@19 -- # read -r var val 00:08:07.363 00:41:59 -- accel/accel.sh@20 -- # val= 00:08:07.364 00:41:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.364 00:41:59 -- accel/accel.sh@19 -- # IFS=: 00:08:07.364 00:41:59 -- accel/accel.sh@19 -- # read -r var val 00:08:07.364 00:41:59 -- accel/accel.sh@20 -- # val= 00:08:07.364 00:41:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.364 00:41:59 -- accel/accel.sh@19 -- # IFS=: 00:08:07.364 00:41:59 -- accel/accel.sh@19 -- # read -r var val 00:08:07.364 00:41:59 -- accel/accel.sh@20 -- # val= 00:08:07.364 00:41:59 -- accel/accel.sh@21 -- # case "$var" in 00:08:07.364 00:41:59 -- accel/accel.sh@19 -- # IFS=: 00:08:07.364 00:41:59 -- accel/accel.sh@19 -- # read -r var val 00:08:07.364 00:41:59 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:07.364 00:41:59 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:07.364 00:41:59 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:07.364 00:08:07.364 real 0m9.636s 00:08:07.364 user 0m3.274s 00:08:07.364 sys 0m0.206s 00:08:07.364 00:41:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:07.364 00:41:59 -- common/autotest_common.sh@10 -- # set +x 00:08:07.364 ************************************ 00:08:07.364 END TEST accel_xor 00:08:07.364 ************************************ 00:08:07.364 00:41:59 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:08:07.364 00:41:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 
00:08:07.364 00:41:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:07.364 00:41:59 -- common/autotest_common.sh@10 -- # set +x 00:08:07.364 ************************************ 00:08:07.364 START TEST accel_xor 00:08:07.364 ************************************ 00:08:07.364 00:42:00 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:08:07.364 00:42:00 -- accel/accel.sh@16 -- # local accel_opc 00:08:07.364 00:42:00 -- accel/accel.sh@17 -- # local accel_module 00:08:07.364 00:42:00 -- accel/accel.sh@19 -- # IFS=: 00:08:07.364 00:42:00 -- accel/accel.sh@19 -- # read -r var val 00:08:07.364 00:42:00 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:08:07.364 00:42:00 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:08:07.364 00:42:00 -- accel/accel.sh@12 -- # build_accel_config 00:08:07.364 00:42:00 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:07.364 00:42:00 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:07.364 00:42:00 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:07.364 00:42:00 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:07.364 00:42:00 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:07.364 00:42:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:07.364 00:42:00 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:07.364 00:42:00 -- accel/accel.sh@40 -- # local IFS=, 00:08:07.364 00:42:00 -- accel/accel.sh@41 -- # jq -r . 00:08:07.623 [2024-04-27 00:42:00.082369] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:08:07.623 [2024-04-27 00:42:00.082475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2590293 ] 00:08:07.623 EAL: No free 2048 kB hugepages reported on node 1 00:08:07.623 [2024-04-27 00:42:00.192730] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.623 [2024-04-27 00:42:00.282950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.623 [2024-04-27 00:42:00.287425] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:07.623 [2024-04-27 00:42:00.295388] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val= 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val= 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val=0x1 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val= 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val= 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- 
accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val=xor 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@23 -- # accel_opc=xor 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val=3 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val= 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val=software 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@22 -- # accel_module=software 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.201 00:42:06 -- accel/accel.sh@20 -- # val=32 00:08:14.201 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.201 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.202 00:42:06 -- accel/accel.sh@20 -- # val=32 00:08:14.202 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.202 00:42:06 -- accel/accel.sh@20 -- # val=1 00:08:14.202 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.202 00:42:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:14.202 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.202 00:42:06 -- accel/accel.sh@20 -- # val=Yes 00:08:14.202 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.202 00:42:06 -- accel/accel.sh@20 -- # val= 00:08:14.202 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:14.202 00:42:06 -- accel/accel.sh@20 -- # val= 00:08:14.202 00:42:06 -- accel/accel.sh@21 -- # case "$var" in 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # IFS=: 00:08:14.202 00:42:06 -- accel/accel.sh@19 -- # read -r var val 00:08:17.493 00:42:09 -- accel/accel.sh@20 -- # val= 00:08:17.493 00:42:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # IFS=: 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # read -r var val 00:08:17.493 00:42:09 -- accel/accel.sh@20 -- # val= 00:08:17.493 00:42:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # IFS=: 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # read -r var val 
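Same workload with three source buffers: the val=3 echoed above is the -x 3 from the run_test line, versus the default 2 in the previous xor pass, and it still runs on the software module. Standalone form:

    # Three-source xor variant from the run_test line above:
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c /tmp/accel.json -t 1 -w xor -y -x 3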
00:08:17.493 00:42:09 -- accel/accel.sh@20 -- # val= 00:08:17.493 00:42:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # IFS=: 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # read -r var val 00:08:17.493 00:42:09 -- accel/accel.sh@20 -- # val= 00:08:17.493 00:42:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # IFS=: 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # read -r var val 00:08:17.493 00:42:09 -- accel/accel.sh@20 -- # val= 00:08:17.493 00:42:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # IFS=: 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # read -r var val 00:08:17.493 00:42:09 -- accel/accel.sh@20 -- # val= 00:08:17.493 00:42:09 -- accel/accel.sh@21 -- # case "$var" in 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # IFS=: 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # read -r var val 00:08:17.493 00:42:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:17.493 00:42:09 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:08:17.493 00:42:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.493 00:08:17.493 real 0m9.655s 00:08:17.493 user 0m3.273s 00:08:17.493 sys 0m0.221s 00:08:17.493 00:42:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:17.493 00:42:09 -- common/autotest_common.sh@10 -- # set +x 00:08:17.493 ************************************ 00:08:17.493 END TEST accel_xor 00:08:17.493 ************************************ 00:08:17.493 00:42:09 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:08:17.493 00:42:09 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:17.493 00:42:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.493 00:42:09 -- common/autotest_common.sh@10 -- # set +x 00:08:17.493 ************************************ 00:08:17.493 START TEST accel_dif_verify 00:08:17.493 ************************************ 00:08:17.493 00:42:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:08:17.493 00:42:09 -- accel/accel.sh@16 -- # local accel_opc 00:08:17.493 00:42:09 -- accel/accel.sh@17 -- # local accel_module 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # IFS=: 00:08:17.493 00:42:09 -- accel/accel.sh@19 -- # read -r var val 00:08:17.493 00:42:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:17.493 00:42:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:17.493 00:42:09 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.493 00:42:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:17.493 00:42:09 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:17.493 00:42:09 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:17.493 00:42:09 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:17.493 00:42:09 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:17.493 00:42:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.493 00:42:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:17.493 00:42:09 -- accel/accel.sh@40 -- # local IFS=, 00:08:17.493 00:42:09 -- accel/accel.sh@41 -- # jq -r . 00:08:17.493 [2024-04-27 00:42:09.854764] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
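The dif_verify pass takes no extra flags; the dump below echoes two 4096-byte buffers plus '512 bytes' and '8 bytes', which read as the DIF block size and metadata size (labels inferred; the IFS=: parse strips them from the output). Standalone form:

    # DIF verify pass, exactly as the harness invoked it:
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf \
        -c /tmp/accel.json -t 1 -w dif_verify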
00:08:17.493 [2024-04-27 00:42:09.854829] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2592882 ] 00:08:17.493 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.493 [2024-04-27 00:42:09.937939] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.493 [2024-04-27 00:42:10.032099] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.493 [2024-04-27 00:42:10.036573] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:17.493 [2024-04-27 00:42:10.044540] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val= 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val= 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val=0x1 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val= 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val= 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val=dif_verify 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val= 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val=dsa 
00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val=32 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val=32 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val=1 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val=No 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val= 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:24.066 00:42:16 -- accel/accel.sh@20 -- # val= 00:08:24.066 00:42:16 -- accel/accel.sh@21 -- # case "$var" in 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # IFS=: 00:08:24.066 00:42:16 -- accel/accel.sh@19 -- # read -r var val 00:08:27.359 00:42:19 -- accel/accel.sh@20 -- # val= 00:08:27.359 00:42:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # IFS=: 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # read -r var val 00:08:27.359 00:42:19 -- accel/accel.sh@20 -- # val= 00:08:27.359 00:42:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # IFS=: 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # read -r var val 00:08:27.359 00:42:19 -- accel/accel.sh@20 -- # val= 00:08:27.359 00:42:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # IFS=: 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # read -r var val 00:08:27.359 00:42:19 -- accel/accel.sh@20 -- # val= 00:08:27.359 00:42:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # IFS=: 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # read -r var val 00:08:27.359 00:42:19 -- accel/accel.sh@20 -- # val= 00:08:27.359 00:42:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # IFS=: 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # read -r var val 00:08:27.359 00:42:19 -- accel/accel.sh@20 -- # val= 00:08:27.359 00:42:19 -- accel/accel.sh@21 -- # case "$var" in 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # IFS=: 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # read -r var val 00:08:27.359 00:42:19 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:27.359 00:42:19 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:08:27.359 00:42:19 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:27.359 00:08:27.359 real 0m9.632s 
00:08:27.359 user 0m3.263s 00:08:27.359 sys 0m0.202s 00:08:27.359 00:42:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:27.359 00:42:19 -- common/autotest_common.sh@10 -- # set +x 00:08:27.359 ************************************ 00:08:27.359 END TEST accel_dif_verify 00:08:27.359 ************************************ 00:08:27.359 00:42:19 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:27.359 00:42:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:27.359 00:42:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:27.359 00:42:19 -- common/autotest_common.sh@10 -- # set +x 00:08:27.359 ************************************ 00:08:27.359 START TEST accel_dif_generate 00:08:27.359 ************************************ 00:08:27.359 00:42:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:08:27.359 00:42:19 -- accel/accel.sh@16 -- # local accel_opc 00:08:27.359 00:42:19 -- accel/accel.sh@17 -- # local accel_module 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # IFS=: 00:08:27.359 00:42:19 -- accel/accel.sh@19 -- # read -r var val 00:08:27.359 00:42:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:27.359 00:42:19 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:27.359 00:42:19 -- accel/accel.sh@12 -- # build_accel_config 00:08:27.359 00:42:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:27.359 00:42:19 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:27.359 00:42:19 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:27.359 00:42:19 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:27.359 00:42:19 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:27.359 00:42:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:27.359 00:42:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:27.359 00:42:19 -- accel/accel.sh@40 -- # local IFS=, 00:08:27.359 00:42:19 -- accel/accel.sh@41 -- # jq -r . 00:08:27.359 [2024-04-27 00:42:19.586260] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
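The dif_verify run above reported four sizes in its trace: '4096 bytes' twice, '512 bytes', and '8 bytes'. The log does not label them, but they are consistent with a standard DIF layout, and the same four values recur in the dif_generate trace below. A hedged reading, with the arithmetic spelled out:

# One reading of the four sizes (unlabeled in the log):
#   4096 bytes - data buffer, 4096 bytes - comparison buffer,
#   512 bytes  - protection interval, 8 bytes - DIF tuple per interval.
blocks=$(( 4096 / 512 ))   # 8 protection intervals per buffer
meta=$(( blocks * 8 ))     # 64 bytes of DIF metadata per buffer
echo "$blocks intervals, $meta bytes of protection info"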
00:08:27.359 [2024-04-27 00:42:19.586359] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2594729 ] 00:08:27.359 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.359 [2024-04-27 00:42:19.697490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.359 [2024-04-27 00:42:19.786317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.359 [2024-04-27 00:42:19.790781] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:27.359 [2024-04-27 00:42:19.798754] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val= 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val= 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val=0x1 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val= 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val= 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val=dif_generate 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val='512 bytes' 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val='8 bytes' 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val= 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # 
val=software 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@22 -- # accel_module=software 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val=32 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val=32 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val=1 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val=No 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val= 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:33.938 00:42:26 -- accel/accel.sh@20 -- # val= 00:08:33.938 00:42:26 -- accel/accel.sh@21 -- # case "$var" in 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # IFS=: 00:08:33.938 00:42:26 -- accel/accel.sh@19 -- # read -r var val 00:08:37.238 00:42:29 -- accel/accel.sh@20 -- # val= 00:08:37.238 00:42:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # IFS=: 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # read -r var val 00:08:37.238 00:42:29 -- accel/accel.sh@20 -- # val= 00:08:37.238 00:42:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # IFS=: 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # read -r var val 00:08:37.238 00:42:29 -- accel/accel.sh@20 -- # val= 00:08:37.238 00:42:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # IFS=: 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # read -r var val 00:08:37.238 00:42:29 -- accel/accel.sh@20 -- # val= 00:08:37.238 00:42:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # IFS=: 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # read -r var val 00:08:37.238 00:42:29 -- accel/accel.sh@20 -- # val= 00:08:37.238 00:42:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # IFS=: 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # read -r var val 00:08:37.238 00:42:29 -- accel/accel.sh@20 -- # val= 00:08:37.238 00:42:29 -- accel/accel.sh@21 -- # case "$var" in 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # IFS=: 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # read -r var val 00:08:37.238 00:42:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:08:37.238 00:42:29 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:08:37.238 00:42:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 
00:08:37.238 00:08:37.238 real 0m9.664s 00:08:37.238 user 0m3.256s 00:08:37.238 sys 0m0.243s 00:08:37.238 00:42:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:37.238 00:42:29 -- common/autotest_common.sh@10 -- # set +x 00:08:37.238 ************************************ 00:08:37.238 END TEST accel_dif_generate 00:08:37.238 ************************************ 00:08:37.238 00:42:29 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:37.238 00:42:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:37.238 00:42:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:37.238 00:42:29 -- common/autotest_common.sh@10 -- # set +x 00:08:37.238 ************************************ 00:08:37.238 START TEST accel_dif_generate_copy 00:08:37.238 ************************************ 00:08:37.238 00:42:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:08:37.238 00:42:29 -- accel/accel.sh@16 -- # local accel_opc 00:08:37.238 00:42:29 -- accel/accel.sh@17 -- # local accel_module 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # IFS=: 00:08:37.238 00:42:29 -- accel/accel.sh@19 -- # read -r var val 00:08:37.238 00:42:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:37.238 00:42:29 -- accel/accel.sh@12 -- # build_accel_config 00:08:37.238 00:42:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:37.238 00:42:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:37.238 00:42:29 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:37.238 00:42:29 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:37.238 00:42:29 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:37.238 00:42:29 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:37.238 00:42:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:37.238 00:42:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:37.238 00:42:29 -- accel/accel.sh@40 -- # local IFS=, 00:08:37.238 00:42:29 -- accel/accel.sh@41 -- # jq -r . 00:08:37.238 [2024-04-27 00:42:29.375431] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
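Each START TEST block in this log repeats the same build_accel_config sequence: the two scan-module RPC entries are appended to the accel_json_cfg bash array, run through jq -r ., and handed to accel_perf as -c /dev/fd/62. A sketch of doing the same by hand; the outer "subsystems" wrapper is an assumption, since only the two method objects are visible in this log:

accel_perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf
"$accel_perf" -c <(jq -r . <<'JSON'
{"subsystems": [{"subsystem": "accel", "config": [
  {"method": "dsa_scan_accel_module"},
  {"method": "iaa_scan_accel_module"}
]}]}
JSON
) -t 1 -w dif_generate_copy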
00:08:37.238 [2024-04-27 00:42:29.375537] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2596693 ] 00:08:37.238 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.238 [2024-04-27 00:42:29.493478] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:37.238 [2024-04-27 00:42:29.586318] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.238 [2024-04-27 00:42:29.590824] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:37.238 [2024-04-27 00:42:29.598794] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val= 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val= 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val=0x1 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val= 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val= 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val= 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val=dsa 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@22 -- # accel_module=dsa 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.859 00:42:35 -- accel/accel.sh@20 -- # val=32 00:08:43.859 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.859 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # read -r var 
val 00:08:43.860 00:42:35 -- accel/accel.sh@20 -- # val=32 00:08:43.860 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.860 00:42:35 -- accel/accel.sh@20 -- # val=1 00:08:43.860 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.860 00:42:35 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:43.860 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.860 00:42:35 -- accel/accel.sh@20 -- # val=No 00:08:43.860 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.860 00:42:35 -- accel/accel.sh@20 -- # val= 00:08:43.860 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:43.860 00:42:35 -- accel/accel.sh@20 -- # val= 00:08:43.860 00:42:35 -- accel/accel.sh@21 -- # case "$var" in 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # IFS=: 00:08:43.860 00:42:35 -- accel/accel.sh@19 -- # read -r var val 00:08:46.394 00:42:38 -- accel/accel.sh@20 -- # val= 00:08:46.394 00:42:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # IFS=: 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # read -r var val 00:08:46.394 00:42:38 -- accel/accel.sh@20 -- # val= 00:08:46.394 00:42:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # IFS=: 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # read -r var val 00:08:46.394 00:42:38 -- accel/accel.sh@20 -- # val= 00:08:46.394 00:42:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # IFS=: 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # read -r var val 00:08:46.394 00:42:38 -- accel/accel.sh@20 -- # val= 00:08:46.394 00:42:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # IFS=: 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # read -r var val 00:08:46.394 00:42:38 -- accel/accel.sh@20 -- # val= 00:08:46.394 00:42:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # IFS=: 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # read -r var val 00:08:46.394 00:42:38 -- accel/accel.sh@20 -- # val= 00:08:46.394 00:42:38 -- accel/accel.sh@21 -- # case "$var" in 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # IFS=: 00:08:46.394 00:42:38 -- accel/accel.sh@19 -- # read -r var val 00:08:46.394 00:42:38 -- accel/accel.sh@27 -- # [[ -n dsa ]] 00:08:46.394 00:42:38 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:08:46.394 00:42:38 -- accel/accel.sh@27 -- # [[ dsa == \d\s\a ]] 00:08:46.394 00:08:46.394 real 0m9.665s 00:08:46.394 user 0m3.253s 00:08:46.394 sys 0m0.244s 00:08:46.394 00:42:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:46.394 00:42:38 -- common/autotest_common.sh@10 -- # set +x 00:08:46.394 ************************************ 00:08:46.394 END TEST accel_dif_generate_copy 00:08:46.394 ************************************ 00:08:46.394 00:42:39 -- accel/accel.sh@115 -- # [[ y == y ]] 00:08:46.394 00:42:39 -- accel/accel.sh@116 -- # run_test accel_comp accel_test 
-t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:46.394 00:42:39 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:46.394 00:42:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:46.394 00:42:39 -- common/autotest_common.sh@10 -- # set +x 00:08:46.651 ************************************ 00:08:46.651 START TEST accel_comp 00:08:46.651 ************************************ 00:08:46.652 00:42:39 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:46.652 00:42:39 -- accel/accel.sh@16 -- # local accel_opc 00:08:46.652 00:42:39 -- accel/accel.sh@17 -- # local accel_module 00:08:46.652 00:42:39 -- accel/accel.sh@19 -- # IFS=: 00:08:46.652 00:42:39 -- accel/accel.sh@19 -- # read -r var val 00:08:46.652 00:42:39 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:46.652 00:42:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:46.652 00:42:39 -- accel/accel.sh@12 -- # build_accel_config 00:08:46.652 00:42:39 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:46.652 00:42:39 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:46.652 00:42:39 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:46.652 00:42:39 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:46.652 00:42:39 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:46.652 00:42:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:46.652 00:42:39 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:46.652 00:42:39 -- accel/accel.sh@40 -- # local IFS=, 00:08:46.652 00:42:39 -- accel/accel.sh@41 -- # jq -r . 00:08:46.652 [2024-04-27 00:42:39.161890] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
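Note the module switch here: the dif_* workloads above were claimed by the dsa module, while compress lands on iaa, matching the two engines this job enables (SPDK_TEST_ACCEL_DSA for data-movement and integrity ops, SPDK_TEST_ACCEL_IAA for compression). After each run the harness asserts the expected module and opcode; the three checks visible at accel.sh@27, reconstructed with illustrative variable names:

[[ -n "$accel_module" ]]                     # some module claimed the op
[[ -n "$accel_opc" ]]                        # the opcode was observed
[[ "$accel_module" == "$expected_module" ]]  # e.g. iaa for compress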
00:08:46.652 [2024-04-27 00:42:39.162001] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2598652 ] 00:08:46.652 EAL: No free 2048 kB hugepages reported on node 1 00:08:46.652 [2024-04-27 00:42:39.271833] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.910 [2024-04-27 00:42:39.361541] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.910 [2024-04-27 00:42:39.365986] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:46.911 [2024-04-27 00:42:39.373956] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val= 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val= 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val= 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val=0x1 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val= 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val= 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val=compress 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@23 -- # accel_opc=compress 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val='4096 bytes' 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val= 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val=iaa 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@22 -- # accel_module=iaa 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- 
accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val=32 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val=32 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val=1 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val='1 seconds' 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val=No 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val= 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:53.482 00:42:45 -- accel/accel.sh@20 -- # val= 00:08:53.482 00:42:45 -- accel/accel.sh@21 -- # case "$var" in 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # IFS=: 00:08:53.482 00:42:45 -- accel/accel.sh@19 -- # read -r var val 00:08:56.781 00:42:48 -- accel/accel.sh@20 -- # val= 00:08:56.781 00:42:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # IFS=: 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # read -r var val 00:08:56.781 00:42:48 -- accel/accel.sh@20 -- # val= 00:08:56.781 00:42:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # IFS=: 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # read -r var val 00:08:56.781 00:42:48 -- accel/accel.sh@20 -- # val= 00:08:56.781 00:42:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # IFS=: 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # read -r var val 00:08:56.781 00:42:48 -- accel/accel.sh@20 -- # val= 00:08:56.781 00:42:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # IFS=: 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # read -r var val 00:08:56.781 00:42:48 -- accel/accel.sh@20 -- # val= 00:08:56.781 00:42:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # IFS=: 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # read -r var val 00:08:56.781 00:42:48 -- accel/accel.sh@20 -- # val= 00:08:56.781 00:42:48 -- accel/accel.sh@21 -- # case "$var" in 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # IFS=: 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # read -r var val 00:08:56.781 00:42:48 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:08:56.781 00:42:48 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:08:56.781 00:42:48 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:08:56.781 00:08:56.781 real 0m9.654s 00:08:56.781 user 0m3.262s 00:08:56.781 sys 0m0.230s 00:08:56.781 00:42:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:08:56.781 00:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:56.781 
************************************ 00:08:56.781 END TEST accel_comp 00:08:56.781 ************************************ 00:08:56.781 00:42:48 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:56.781 00:42:48 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:56.781 00:42:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.781 00:42:48 -- common/autotest_common.sh@10 -- # set +x 00:08:56.781 ************************************ 00:08:56.781 START TEST accel_decomp 00:08:56.781 ************************************ 00:08:56.781 00:42:48 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:56.781 00:42:48 -- accel/accel.sh@16 -- # local accel_opc 00:08:56.781 00:42:48 -- accel/accel.sh@17 -- # local accel_module 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # IFS=: 00:08:56.781 00:42:48 -- accel/accel.sh@19 -- # read -r var val 00:08:56.781 00:42:48 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:56.781 00:42:48 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y 00:08:56.781 00:42:48 -- accel/accel.sh@12 -- # build_accel_config 00:08:56.781 00:42:48 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:08:56.781 00:42:48 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:08:56.781 00:42:48 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:08:56.781 00:42:48 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:08:56.781 00:42:48 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:08:56.781 00:42:48 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:56.781 00:42:48 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:08:56.781 00:42:48 -- accel/accel.sh@40 -- # local IFS=, 00:08:56.781 00:42:48 -- accel/accel.sh@41 -- # jq -r . 00:08:56.781 [2024-04-27 00:42:48.935240] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
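Comparing the two run_test invocations, compress ran without -y and its trace ended val=No, while this decompress run adds -y and its trace below ends val=Yes, so the trailing boolean plausibly reflects a verify-result option. Side by side, with $accel_perf and $cfg as placeholders for the long path and the generated config:

"$accel_perf" -c "$cfg" -t 1 -w compress   -l test/accel/bib      # trace ends: val=No
"$accel_perf" -c "$cfg" -t 1 -w decompress -l test/accel/bib -y   # trace ends: val=Yes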
00:08:56.781 [2024-04-27 00:42:48.935346] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2600468 ] 00:08:56.781 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.781 [2024-04-27 00:42:49.049364] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.781 [2024-04-27 00:42:49.141179] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.781 [2024-04-27 00:42:49.145630] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:08:56.781 [2024-04-27 00:42:49.153599] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val= 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val= 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val= 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val=0x1 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val= 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val= 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val=decompress 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val= 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val=iaa 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- 
accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val=32 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val=32 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val=1 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val=Yes 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val= 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:03.353 00:42:55 -- accel/accel.sh@20 -- # val= 00:09:03.353 00:42:55 -- accel/accel.sh@21 -- # case "$var" in 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # IFS=: 00:09:03.353 00:42:55 -- accel/accel.sh@19 -- # read -r var val 00:09:05.890 00:42:58 -- accel/accel.sh@20 -- # val= 00:09:05.890 00:42:58 -- accel/accel.sh@21 -- # case "$var" in 00:09:05.890 00:42:58 -- accel/accel.sh@19 -- # IFS=: 00:09:05.890 00:42:58 -- accel/accel.sh@19 -- # read -r var val 00:09:05.890 00:42:58 -- accel/accel.sh@20 -- # val= 00:09:05.890 00:42:58 -- accel/accel.sh@21 -- # case "$var" in 00:09:05.890 00:42:58 -- accel/accel.sh@19 -- # IFS=: 00:09:05.890 00:42:58 -- accel/accel.sh@19 -- # read -r var val 00:09:05.890 00:42:58 -- accel/accel.sh@20 -- # val= 00:09:05.890 00:42:58 -- accel/accel.sh@21 -- # case "$var" in 00:09:05.890 00:42:58 -- accel/accel.sh@19 -- # IFS=: 00:09:05.890 00:42:58 -- accel/accel.sh@19 -- # read -r var val 00:09:05.890 00:42:58 -- accel/accel.sh@20 -- # val= 00:09:05.890 00:42:58 -- accel/accel.sh@21 -- # case "$var" in 00:09:05.890 00:42:58 -- accel/accel.sh@19 -- # IFS=: 00:09:05.890 00:42:58 -- accel/accel.sh@19 -- # read -r var val 00:09:05.891 00:42:58 -- accel/accel.sh@20 -- # val= 00:09:05.891 00:42:58 -- accel/accel.sh@21 -- # case "$var" in 00:09:05.891 00:42:58 -- accel/accel.sh@19 -- # IFS=: 00:09:05.891 00:42:58 -- accel/accel.sh@19 -- # read -r var val 00:09:05.891 00:42:58 -- accel/accel.sh@20 -- # val= 00:09:05.891 00:42:58 -- accel/accel.sh@21 -- # case "$var" in 00:09:05.891 00:42:58 -- accel/accel.sh@19 -- # IFS=: 00:09:05.891 00:42:58 -- accel/accel.sh@19 -- # read -r var val 00:09:05.891 00:42:58 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:05.891 00:42:58 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:05.891 00:42:58 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:05.891 00:09:05.891 real 0m9.657s 00:09:05.891 user 0m3.276s 00:09:05.891 sys 0m0.226s 00:09:05.891 00:42:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:05.891 00:42:58 -- common/autotest_common.sh@10 -- # set +x 00:09:05.891 
************************************ 00:09:05.891 END TEST accel_decomp 00:09:05.891 ************************************ 00:09:05.891 00:42:58 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:05.891 00:42:58 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:09:05.891 00:42:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:05.891 00:42:58 -- common/autotest_common.sh@10 -- # set +x 00:09:06.150 ************************************ 00:09:06.150 START TEST accel_decmop_full 00:09:06.150 ************************************ 00:09:06.150 00:42:58 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:06.150 00:42:58 -- accel/accel.sh@16 -- # local accel_opc 00:09:06.150 00:42:58 -- accel/accel.sh@17 -- # local accel_module 00:09:06.150 00:42:58 -- accel/accel.sh@19 -- # IFS=: 00:09:06.150 00:42:58 -- accel/accel.sh@19 -- # read -r var val 00:09:06.150 00:42:58 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:06.150 00:42:58 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 00:09:06.150 00:42:58 -- accel/accel.sh@12 -- # build_accel_config 00:09:06.150 00:42:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:06.150 00:42:58 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:06.150 00:42:58 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:06.150 00:42:58 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:06.150 00:42:58 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:06.150 00:42:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:06.150 00:42:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:06.150 00:42:58 -- accel/accel.sh@40 -- # local IFS=, 00:09:06.150 00:42:58 -- accel/accel.sh@41 -- # jq -r . 00:09:06.150 [2024-04-27 00:42:58.708237] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
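The only flag separating accel_decmop_full from the plain accel_decomp run above is -o 0, and the trace below correspondingly reports a single '111250 bytes' operation instead of '4096 bytes' chunks, plausibly a size derived from the bib input file. Again with placeholder variables:

"$accel_perf" -c "$cfg" -t 1 -w decompress -l test/accel/bib -y        # trace: '4096 bytes'
"$accel_perf" -c "$cfg" -t 1 -w decompress -l test/accel/bib -y -o 0   # trace: '111250 bytes'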
00:09:06.150 [2024-04-27 00:42:58.708343] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2602567 ] 00:09:06.150 EAL: No free 2048 kB hugepages reported on node 1 00:09:06.150 [2024-04-27 00:42:58.824259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.408 [2024-04-27 00:42:58.914052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.408 [2024-04-27 00:42:58.918733] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:06.408 [2024-04-27 00:42:58.926702] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:12.986 00:43:05 -- accel/accel.sh@20 -- # val= 00:09:12.986 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.986 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val= 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val= 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val=0x1 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val= 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val= 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val=decompress 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val= 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val=iaa 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- 
accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val=32 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val=32 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val=1 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val=Yes 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val= 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:12.987 00:43:05 -- accel/accel.sh@20 -- # val= 00:09:12.987 00:43:05 -- accel/accel.sh@21 -- # case "$var" in 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # IFS=: 00:09:12.987 00:43:05 -- accel/accel.sh@19 -- # read -r var val 00:09:16.322 00:43:08 -- accel/accel.sh@20 -- # val= 00:09:16.322 00:43:08 -- accel/accel.sh@21 -- # case "$var" in 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # IFS=: 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # read -r var val 00:09:16.322 00:43:08 -- accel/accel.sh@20 -- # val= 00:09:16.322 00:43:08 -- accel/accel.sh@21 -- # case "$var" in 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # IFS=: 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # read -r var val 00:09:16.322 00:43:08 -- accel/accel.sh@20 -- # val= 00:09:16.322 00:43:08 -- accel/accel.sh@21 -- # case "$var" in 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # IFS=: 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # read -r var val 00:09:16.322 00:43:08 -- accel/accel.sh@20 -- # val= 00:09:16.322 00:43:08 -- accel/accel.sh@21 -- # case "$var" in 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # IFS=: 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # read -r var val 00:09:16.322 00:43:08 -- accel/accel.sh@20 -- # val= 00:09:16.322 00:43:08 -- accel/accel.sh@21 -- # case "$var" in 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # IFS=: 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # read -r var val 00:09:16.322 00:43:08 -- accel/accel.sh@20 -- # val= 00:09:16.322 00:43:08 -- accel/accel.sh@21 -- # case "$var" in 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # IFS=: 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # read -r var val 00:09:16.322 00:43:08 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:16.322 00:43:08 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:16.322 00:43:08 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:16.322 00:09:16.322 real 0m9.678s 00:09:16.322 user 0m3.291s 00:09:16.322 sys 0m0.222s 00:09:16.322 00:43:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:16.322 00:43:08 -- common/autotest_common.sh@10 -- # set +x 00:09:16.322 
************************************ 00:09:16.322 END TEST accel_decmop_full 00:09:16.322 ************************************ 00:09:16.322 00:43:08 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:16.322 00:43:08 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:09:16.322 00:43:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:16.322 00:43:08 -- common/autotest_common.sh@10 -- # set +x 00:09:16.322 ************************************ 00:09:16.322 START TEST accel_decomp_mcore 00:09:16.322 ************************************ 00:09:16.322 00:43:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:16.322 00:43:08 -- accel/accel.sh@16 -- # local accel_opc 00:09:16.322 00:43:08 -- accel/accel.sh@17 -- # local accel_module 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # IFS=: 00:09:16.322 00:43:08 -- accel/accel.sh@19 -- # read -r var val 00:09:16.322 00:43:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:16.322 00:43:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:09:16.322 00:43:08 -- accel/accel.sh@12 -- # build_accel_config 00:09:16.322 00:43:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:16.322 00:43:08 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:16.322 00:43:08 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:16.322 00:43:08 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:16.322 00:43:08 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:16.322 00:43:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:16.322 00:43:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:16.322 00:43:08 -- accel/accel.sh@40 -- # local IFS=, 00:09:16.322 00:43:08 -- accel/accel.sh@41 -- # jq -r . 00:09:16.322 [2024-04-27 00:43:08.508387] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
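This is the first multi-core run: -m 0xf sets the reactor core mask, and 0xf is binary 1111, i.e. cores 0 through 3, which matches the "Total cores available: 4" notice and the four "Reactor started on core N" lines below. A quick way to check the mask arithmetic:

mask=0xf
cores=0
for ((i = 0; i < 64; i++)); do
    (( (mask >> i) & 1 )) && cores=$(( cores + 1 ))
done
echo "$cores reactor cores"   # prints: 4 reactor cores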
00:09:16.322 [2024-04-27 00:43:08.508505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2604393 ] 00:09:16.322 EAL: No free 2048 kB hugepages reported on node 1 00:09:16.322 [2024-04-27 00:43:08.623421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:16.322 [2024-04-27 00:43:08.716678] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:16.322 [2024-04-27 00:43:08.716706] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:16.322 [2024-04-27 00:43:08.716819] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.322 [2024-04-27 00:43:08.716825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:16.322 [2024-04-27 00:43:08.721379] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:16.322 [2024-04-27 00:43:08.729339] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val= 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val= 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val= 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val=0xf 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val= 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val= 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val=decompress 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val= 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val=iaa 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 
00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val=32 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val=32 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val=1 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val=Yes 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val= 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:22.898 00:43:15 -- accel/accel.sh@20 -- # val= 00:09:22.898 00:43:15 -- accel/accel.sh@21 -- # case "$var" in 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # IFS=: 00:09:22.898 00:43:15 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 
00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@20 -- # val= 00:09:26.187 00:43:18 -- accel/accel.sh@21 -- # case "$var" in 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:26.187 00:43:18 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:26.187 00:43:18 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:26.187 00:09:26.187 real 0m9.700s 00:09:26.187 user 0m31.096s 00:09:26.187 sys 0m0.240s 00:09:26.187 00:43:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:26.187 00:43:18 -- common/autotest_common.sh@10 -- # set +x 00:09:26.187 ************************************ 00:09:26.187 END TEST accel_decomp_mcore 00:09:26.187 ************************************ 00:09:26.187 00:43:18 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:26.187 00:43:18 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:26.187 00:43:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:26.187 00:43:18 -- common/autotest_common.sh@10 -- # set +x 00:09:26.187 ************************************ 00:09:26.187 START TEST accel_decomp_full_mcore 00:09:26.187 ************************************ 00:09:26.187 00:43:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:26.187 00:43:18 -- accel/accel.sh@16 -- # local accel_opc 00:09:26.187 00:43:18 -- accel/accel.sh@17 -- # local accel_module 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # IFS=: 00:09:26.187 00:43:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:26.187 00:43:18 -- accel/accel.sh@19 -- # read -r var val 00:09:26.187 00:43:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:09:26.187 00:43:18 -- accel/accel.sh@12 -- # build_accel_config 00:09:26.187 00:43:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:26.187 00:43:18 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:26.187 00:43:18 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:26.187 00:43:18 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:26.187 00:43:18 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:26.187 00:43:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:26.187 00:43:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:26.187 00:43:18 -- accel/accel.sh@40 -- # local IFS=, 00:09:26.187 00:43:18 -- accel/accel.sh@41 -- # jq -r . 00:09:26.187 [2024-04-27 00:43:18.329514] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
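The walls of "-- # val=..." lines above and below are accel.sh replaying accel_perf's settings: the script reads colon-separated var:val pairs and dispatches on each one, recording at least the opcode and the module that serviced it. A simplified reconstruction from the IFS=:, read -r var val and case "$var" fragments in the xtrace; the variable and case names here are guesses, not the verbatim script:

# Simplified sketch, not the verbatim accel.sh source.
while IFS=: read -r var val; do
  case "$var" in
    opc)    accel_opc=$val ;;      # e.g. "decompress"
    module) accel_module=$val ;;   # e.g. "iaa"
    *)      : ;;                   # mask, sizes, duration, ...
  esac
done < settings.txt                # stand-in; the test reads from a pipe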
00:09:26.187 [2024-04-27 00:43:18.329615] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2606490 ] 00:09:26.187 EAL: No free 2048 kB hugepages reported on node 1 00:09:26.187 [2024-04-27 00:43:18.446551] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:26.187 [2024-04-27 00:43:18.539016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:26.187 [2024-04-27 00:43:18.539121] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:26.187 [2024-04-27 00:43:18.539224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.187 [2024-04-27 00:43:18.539240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:26.187 [2024-04-27 00:43:18.543755] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:26.187 [2024-04-27 00:43:18.551721] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:32.758 00:43:24 -- accel/accel.sh@20 -- # val= 00:09:32.758 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.758 00:43:24 -- accel/accel.sh@20 -- # val= 00:09:32.758 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.758 00:43:24 -- accel/accel.sh@20 -- # val= 00:09:32.758 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.758 00:43:24 -- accel/accel.sh@20 -- # val=0xf 00:09:32.758 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.758 00:43:24 -- accel/accel.sh@20 -- # val= 00:09:32.758 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.758 00:43:24 -- accel/accel.sh@20 -- # val= 00:09:32.758 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.758 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.758 00:43:24 -- accel/accel.sh@20 -- # val=decompress 00:09:32.758 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.758 00:43:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val= 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val=iaa 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 
00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val=32 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val=32 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val=1 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val=Yes 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val= 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:32.759 00:43:24 -- accel/accel.sh@20 -- # val= 00:09:32.759 00:43:24 -- accel/accel.sh@21 -- # case "$var" in 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # IFS=: 00:09:32.759 00:43:24 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.047 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.047 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.047 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.047 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.047 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.047 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.047 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.047 
00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.047 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.047 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.047 00:43:27 -- accel/accel.sh@20 -- # val= 00:09:36.048 00:43:27 -- accel/accel.sh@21 -- # case "$var" in 00:09:36.048 00:43:27 -- accel/accel.sh@19 -- # IFS=: 00:09:36.048 00:43:27 -- accel/accel.sh@19 -- # read -r var val 00:09:36.048 00:43:28 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:36.048 00:43:28 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:36.048 00:43:28 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:36.048 00:09:36.048 real 0m9.714s 00:09:36.048 user 0m31.113s 00:09:36.048 sys 0m0.249s 00:09:36.048 00:43:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:36.048 00:43:28 -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 ************************************ 00:09:36.048 END TEST accel_decomp_full_mcore 00:09:36.048 ************************************ 00:09:36.048 00:43:28 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:36.048 00:43:28 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:09:36.048 00:43:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:36.048 00:43:28 -- common/autotest_common.sh@10 -- # set +x 00:09:36.048 ************************************ 00:09:36.048 START TEST accel_decomp_mthread 00:09:36.048 ************************************ 00:09:36.048 00:43:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:36.048 00:43:28 -- accel/accel.sh@16 -- # local accel_opc 00:09:36.048 00:43:28 -- accel/accel.sh@17 -- # local accel_module 00:09:36.048 00:43:28 -- accel/accel.sh@19 -- # IFS=: 00:09:36.048 00:43:28 -- accel/accel.sh@19 -- # read -r var val 00:09:36.048 00:43:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:36.048 00:43:28 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -T 2 00:09:36.048 00:43:28 -- accel/accel.sh@12 -- # build_accel_config 00:09:36.048 00:43:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:36.048 00:43:28 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:36.048 00:43:28 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:36.048 00:43:28 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:36.048 00:43:28 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:36.048 00:43:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:36.048 00:43:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:36.048 00:43:28 -- accel/accel.sh@40 -- # local IFS=, 00:09:36.048 00:43:28 -- accel/accel.sh@41 -- # jq -r . 00:09:36.048 [2024-04-27 00:43:28.151819] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
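Each phase reduces to one accel_perf invocation whose flags appear on the run_test lines: -t 1 -w decompress -l .../test/accel/bib -y, plus -m 0xf for the mcore phases, -o 0 for the "full" phases (the whole 111250-byte file instead of 4096-byte chunks, per the val lines above), and -T 2 for the mthread phase just starting. A hand-runnable equivalent; the flag glosses are a reading aid based on accel_perf's usual usage text, not quoted from this log:

cd /var/jenkins/workspace/dsa-phy-autotest/spdk
# -c config JSON (the harness supplies it on /dev/fd/62; accel.json is a placeholder)
# -t run time in seconds          -w workload type
# -l compressed input file        -y verify the result
# -T worker threads per core
./build/examples/accel_perf -c accel.json -t 1 -w decompress \
  -l test/accel/bib -y -T 2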
00:09:36.048 [2024-04-27 00:43:28.151922] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2608321 ] 00:09:36.048 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.048 [2024-04-27 00:43:28.263936] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.048 [2024-04-27 00:43:28.353843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.048 [2024-04-27 00:43:28.358344] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:36.048 [2024-04-27 00:43:28.366305] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val= 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val= 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val= 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val=0x1 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val= 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val= 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val=decompress 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val='4096 bytes' 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val= 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val=iaa 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- 
accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val=32 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val=32 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val=2 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val=Yes 00:09:42.625 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.625 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.625 00:43:34 -- accel/accel.sh@20 -- # val= 00:09:42.626 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.626 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.626 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:42.626 00:43:34 -- accel/accel.sh@20 -- # val= 00:09:42.626 00:43:34 -- accel/accel.sh@21 -- # case "$var" in 00:09:42.626 00:43:34 -- accel/accel.sh@19 -- # IFS=: 00:09:42.626 00:43:34 -- accel/accel.sh@19 -- # read -r var val 00:09:45.187 00:43:37 -- accel/accel.sh@20 -- # val= 00:09:45.187 00:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # IFS=: 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # read -r var val 00:09:45.187 00:43:37 -- accel/accel.sh@20 -- # val= 00:09:45.187 00:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # IFS=: 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # read -r var val 00:09:45.187 00:43:37 -- accel/accel.sh@20 -- # val= 00:09:45.187 00:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # IFS=: 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # read -r var val 00:09:45.187 00:43:37 -- accel/accel.sh@20 -- # val= 00:09:45.187 00:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # IFS=: 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # read -r var val 00:09:45.187 00:43:37 -- accel/accel.sh@20 -- # val= 00:09:45.187 00:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # IFS=: 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # read -r var val 00:09:45.187 00:43:37 -- accel/accel.sh@20 -- # val= 00:09:45.187 00:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # IFS=: 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # read -r var val 00:09:45.187 00:43:37 -- accel/accel.sh@20 -- # val= 00:09:45.187 00:43:37 -- accel/accel.sh@21 -- # case "$var" in 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # IFS=: 00:09:45.187 00:43:37 -- accel/accel.sh@19 -- # read -r var val 00:09:45.187 00:43:37 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:45.187 00:43:37 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:45.187 00:43:37 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:45.187 
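The three accel.sh@27 checks just above are the pass criteria for every phase: a module name came back, an opcode came back, and the module is the expected hardware engine rather than a software fallback. In sketch form, using the reconstructed variable names from the loop sketch earlier (the log shows them already expanded, e.g. [[ iaa == \i\a\a ]]):

[[ -n $accel_module ]]        # some module reported in (here: iaa)
[[ -n $accel_opc ]]           # the opcode was exercised (decompress)
[[ $accel_module == "iaa" ]]  # and it ran on IAA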
00:09:45.187 real 0m9.670s 00:09:45.187 user 0m3.265s 00:09:45.187 sys 0m0.231s 00:09:45.187 00:43:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:45.187 00:43:37 -- common/autotest_common.sh@10 -- # set +x 00:09:45.187 ************************************ 00:09:45.187 END TEST accel_decomp_mthread 00:09:45.187 ************************************ 00:09:45.187 00:43:37 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:45.187 00:43:37 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:09:45.187 00:43:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:45.187 00:43:37 -- common/autotest_common.sh@10 -- # set +x 00:09:45.448 ************************************ 00:09:45.448 START TEST accel_deomp_full_mthread 00:09:45.448 ************************************ 00:09:45.448 00:43:37 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:45.448 00:43:37 -- accel/accel.sh@16 -- # local accel_opc 00:09:45.448 00:43:37 -- accel/accel.sh@17 -- # local accel_module 00:09:45.448 00:43:37 -- accel/accel.sh@19 -- # IFS=: 00:09:45.448 00:43:37 -- accel/accel.sh@19 -- # read -r var val 00:09:45.448 00:43:37 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:45.448 00:43:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:09:45.448 00:43:37 -- accel/accel.sh@12 -- # build_accel_config 00:09:45.448 00:43:37 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:45.448 00:43:37 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:45.448 00:43:37 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:45.448 00:43:37 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:45.448 00:43:37 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:45.448 00:43:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:45.448 00:43:37 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:45.448 00:43:37 -- accel/accel.sh@40 -- # local IFS=, 00:09:45.448 00:43:37 -- accel/accel.sh@41 -- # jq -r . 00:09:45.448 [2024-04-27 00:43:37.945485] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
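Every phase in this log is framed identically: run_test prints a START banner, times the command, emits the real/user/sys triple, and closes with an END banner, exactly the pattern around the "real 0m9.670s" line above. A stripped-down sketch of such a wrapper; the real run_test in autotest_common.sh also manages xtrace and error bookkeeping:

run_test_sketch() {            # simplified, not the autotest_common.sh source
  local name=$1; shift
  echo "************ START TEST $name ************"
  time "$@"                    # produces the real/user/sys lines seen here
  local rc=$?
  echo "************ END TEST $name ************"
  return $rc
}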
00:09:45.448 [2024-04-27 00:43:37.945592] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2610175 ] 00:09:45.448 EAL: No free 2048 kB hugepages reported on node 1 00:09:45.448 [2024-04-27 00:43:38.066698] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.708 [2024-04-27 00:43:38.160457] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.708 [2024-04-27 00:43:38.164985] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:45.708 [2024-04-27 00:43:38.172948] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:09:52.331 00:43:44 -- accel/accel.sh@20 -- # val= 00:09:52.331 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.331 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.331 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.331 00:43:44 -- accel/accel.sh@20 -- # val= 00:09:52.331 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.331 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.331 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.331 00:43:44 -- accel/accel.sh@20 -- # val= 00:09:52.331 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.331 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val=0x1 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val= 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val= 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val=decompress 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@23 -- # accel_opc=decompress 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val='111250 bytes' 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val= 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val=iaa 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@22 -- # accel_module=iaa 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/bib 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- 
accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val=32 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val=32 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val=2 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val='1 seconds' 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val=Yes 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val= 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:52.332 00:43:44 -- accel/accel.sh@20 -- # val= 00:09:52.332 00:43:44 -- accel/accel.sh@21 -- # case "$var" in 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # IFS=: 00:09:52.332 00:43:44 -- accel/accel.sh@19 -- # read -r var val 00:09:55.620 00:43:47 -- accel/accel.sh@20 -- # val= 00:09:55.620 00:43:47 -- accel/accel.sh@21 -- # case "$var" in 00:09:55.620 00:43:47 -- accel/accel.sh@19 -- # IFS=: 00:09:55.620 00:43:47 -- accel/accel.sh@19 -- # read -r var val 00:09:55.620 00:43:47 -- accel/accel.sh@20 -- # val= 00:09:55.620 00:43:47 -- accel/accel.sh@21 -- # case "$var" in 00:09:55.620 00:43:47 -- accel/accel.sh@19 -- # IFS=: 00:09:55.620 00:43:47 -- accel/accel.sh@19 -- # read -r var val 00:09:55.620 00:43:47 -- accel/accel.sh@20 -- # val= 00:09:55.620 00:43:47 -- accel/accel.sh@21 -- # case "$var" in 00:09:55.620 00:43:47 -- accel/accel.sh@19 -- # IFS=: 00:09:55.620 00:43:47 -- accel/accel.sh@19 -- # read -r var val 00:09:55.621 00:43:47 -- accel/accel.sh@20 -- # val= 00:09:55.621 00:43:47 -- accel/accel.sh@21 -- # case "$var" in 00:09:55.621 00:43:47 -- accel/accel.sh@19 -- # IFS=: 00:09:55.621 00:43:47 -- accel/accel.sh@19 -- # read -r var val 00:09:55.621 00:43:47 -- accel/accel.sh@20 -- # val= 00:09:55.621 00:43:47 -- accel/accel.sh@21 -- # case "$var" in 00:09:55.621 00:43:47 -- accel/accel.sh@19 -- # IFS=: 00:09:55.621 00:43:47 -- accel/accel.sh@19 -- # read -r var val 00:09:55.621 00:43:47 -- accel/accel.sh@20 -- # val= 00:09:55.621 00:43:47 -- accel/accel.sh@21 -- # case "$var" in 00:09:55.621 00:43:47 -- accel/accel.sh@19 -- # IFS=: 00:09:55.621 00:43:47 -- accel/accel.sh@19 -- # read -r var val 00:09:55.621 00:43:47 -- accel/accel.sh@20 -- # val= 00:09:55.621 00:43:47 -- accel/accel.sh@21 -- # case "$var" in 00:09:55.621 00:43:47 -- accel/accel.sh@19 -- # IFS=: 00:09:55.621 00:43:47 -- accel/accel.sh@19 -- # read -r var val 00:09:55.621 00:43:47 -- accel/accel.sh@27 -- # [[ -n iaa ]] 00:09:55.621 00:43:47 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:09:55.621 00:43:47 -- accel/accel.sh@27 -- # [[ iaa == \i\a\a ]] 00:09:55.621 
00:09:55.621 real 0m9.709s 00:09:55.621 user 0m3.303s 00:09:55.621 sys 0m0.235s 00:09:55.621 00:43:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:09:55.621 00:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:55.621 ************************************ 00:09:55.621 END TEST accel_deomp_full_mthread 00:09:55.621 ************************************ 00:09:55.621 00:43:47 -- accel/accel.sh@124 -- # [[ n == y ]] 00:09:55.621 00:43:47 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:55.621 00:43:47 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:55.621 00:43:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:55.621 00:43:47 -- common/autotest_common.sh@10 -- # set +x 00:09:55.621 00:43:47 -- accel/accel.sh@137 -- # build_accel_config 00:09:55.621 00:43:47 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:09:55.621 00:43:47 -- accel/accel.sh@32 -- # [[ 1 -gt 0 ]] 00:09:55.621 00:43:47 -- accel/accel.sh@32 -- # accel_json_cfg+=('{"method": "dsa_scan_accel_module"}') 00:09:55.621 00:43:47 -- accel/accel.sh@33 -- # [[ 1 -gt 0 ]] 00:09:55.621 00:43:47 -- accel/accel.sh@33 -- # accel_json_cfg+=('{"method": "iaa_scan_accel_module"}') 00:09:55.621 00:43:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:09:55.621 00:43:47 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:09:55.621 00:43:47 -- accel/accel.sh@40 -- # local IFS=, 00:09:55.621 00:43:47 -- accel/accel.sh@41 -- # jq -r . 00:09:55.621 ************************************ 00:09:55.621 START TEST accel_dif_functional_tests 00:09:55.621 ************************************ 00:09:55.621 00:43:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:09:55.621 [2024-04-27 00:43:47.797384] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
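accel_dif_functional_tests switches from accel_perf to a dedicated CUnit binary, test/accel/dif/dif, fed the same accel JSON config on /dev/fd/62; it exercises T10 DIF generate/verify on DSA, and the Guard, App Tag and Ref Tag comparison errors below are deliberate negative cases, each expected to end in "passed". A hand-run sketch reusing the assumed config shape from earlier:

cfg='{"subsystems":[{"subsystem":"accel","config":[
      {"method":"dsa_scan_accel_module"},
      {"method":"iaa_scan_accel_module"}]}]}'   # wrapper shape assumed
# process substitution stands in for the harness's /dev/fd/62 plumbing
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/dif/dif -c <(echo "$cfg")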
00:09:55.621 [2024-04-27 00:43:47.797479] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2612238 ] 00:09:55.621 EAL: No free 2048 kB hugepages reported on node 1 00:09:55.621 [2024-04-27 00:43:47.913820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:55.621 [2024-04-27 00:43:48.006486] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:55.621 [2024-04-27 00:43:48.006568] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.621 [2024-04-27 00:43:48.006573] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:55.621 [2024-04-27 00:43:48.011150] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:09:55.621 [2024-04-27 00:43:48.019116] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:05.609 00:10:05.609 00:10:05.609 CUnit - A unit testing framework for C - Version 2.1-3 00:10:05.609 http://cunit.sourceforge.net/ 00:10:05.609 00:10:05.609 00:10:05.609 Suite: accel_dif 00:10:05.609 Test: verify: DIF generated, GUARD check ...passed 00:10:05.609 Test: verify: DIF generated, APPTAG check ...passed 00:10:05.609 Test: verify: DIF generated, REFTAG check ...passed 00:10:05.609 Test: verify: DIF not generated, GUARD check ...[2024-04-27 00:43:56.668965] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:10:05.609 [2024-04-27 00:43:56.669010] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-27 00:43:56.669022] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669031] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669038] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669046] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669052] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:10:05.609 [2024-04-27 00:43:56.669061] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:10:05.609 [2024-04-27 00:43:56.669068] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:10:05.609 [2024-04-27 00:43:56.669091] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:05.609 [2024-04-27 00:43:56.669103] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=4, offset=0 00:10:05.609 [2024-04-27 00:43:56.669125] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:10:05.609 passed 00:10:05.609 Test: verify: DIF not generated, APPTAG check ...[2024-04-27 00:43:56.669186] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:10:05.609 [2024-04-27 00:43:56.669196] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-27 00:43:56.669205] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669212] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669222] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669230] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669238] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:10:05.609 [2024-04-27 00:43:56.669244] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:10:05.609 [2024-04-27 00:43:56.669252] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:10:05.609 [2024-04-27 00:43:56.669260] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:05.609 [2024-04-27 00:43:56.669270] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:10:05.609 [2024-04-27 00:43:56.669290] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:10:05.609 passed 00:10:05.609 Test: verify: DIF not generated, REFTAG check ...[2024-04-27 00:43:56.669324] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:10:05.609 [2024-04-27 00:43:56.669334] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-27 00:43:56.669340] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669347] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669353] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669360] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.609 [2024-04-27 00:43:56.669366] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:10:05.609 [2024-04-27 00:43:56.669378] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:10:05.609 [2024-04-27 00:43:56.669384] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:10:05.609 [2024-04-27 00:43:56.669395] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:05.609 [2024-04-27 00:43:56.669403] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. 
type=1, offset=0 00:10:05.609 [2024-04-27 00:43:56.669420] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:10:05.609 passed 00:10:05.609 Test: verify: APPTAG correct, APPTAG check ...passed 00:10:05.610 Test: verify: APPTAG incorrect, APPTAG check ...[2024-04-27 00:43:56.669494] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:10:05.610 [2024-04-27 00:43:56.669503] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-27 00:43:56.669511] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669517] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669525] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669531] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669539] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:10:05.610 [2024-04-27 00:43:56.669545] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:10:05.610 [2024-04-27 00:43:56.669554] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:10:05.610 [2024-04-27 00:43:56.669563] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:10:05.610 [2024-04-27 00:43:56.669571] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=2, offset=0 00:10:05.610 passed 00:10:05.610 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:10:05.610 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:10:05.610 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:10:05.610 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-27 00:43:56.669728] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:10:05.610 [2024-04-27 00:43:56.669739] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-27 00:43:56.669746] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669753] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669759] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669767] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669776] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:10:05.610 [2024-04-27 00:43:56.669785] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:10:05.610 [2024-04-27 00:43:56.669791] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:10:05.610 [2024-04-27 00:43:56.669799] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x9 00:10:05.610 [2024-04-27 00:43:56.669805] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw:[2024-04-27 00:43:56.669813] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669819] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669827] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669832] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669840] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:10:05.610 [2024-04-27 00:43:56.669846] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:10:05.610 [2024-04-27 00:43:56.669854] idxd_user.c: 
436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:10:05.610 [2024-04-27 00:43:56.669862] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:10:05.610 [2024-04-27 00:43:56.669874] accel_dsa.c: 127:dsa_done: *ERROR*: DIF error detected. type=1, offset=0 00:10:05.610 [2024-04-27 00:43:56.669883] idxd.c:1812:spdk_idxd_process_events: *ERROR*: Completion status 0x5 00:10:05.610 passed[2024-04-27 00:43:56.669892] idxd_user.c: 428:user_idxd_dump_sw_err: *NOTICE*: SW Error Raw: 00:10:05.610 Test: generate copy: DIF generated, GUARD check ...[2024-04-27 00:43:56.669899] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669907] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669913] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669920] idxd_user.c: 431:user_idxd_dump_sw_err: *NOTICE*: 0x0 00:10:05.610 [2024-04-27 00:43:56.669926] idxd_user.c: 434:user_idxd_dump_sw_err: *NOTICE*: SW Error error code: 0 00:10:05.610 [2024-04-27 00:43:56.669935] idxd_user.c: 435:user_idxd_dump_sw_err: *NOTICE*: SW Error WQ index: 0 00:10:05.610 [2024-04-27 00:43:56.669941] idxd_user.c: 436:user_idxd_dump_sw_err: *NOTICE*: SW Error Operation: 0 00:10:05.610 passed 00:10:05.610 Test: generate copy: DIF generated, APTTAG check ...passed 00:10:05.610 Test: generate copy: DIF generated, REFTAG check ...passed 00:10:05.610 Test: generate copy: DIF generated, no GUARD check flag set ...[2024-04-27 00:43:56.670077] idxd.c:1571:idxd_validate_dif_insert_params: *ERROR*: Guard check flag must be set. 00:10:05.610 passed 00:10:05.610 Test: generate copy: DIF generated, no APPTAG check flag set ...[2024-04-27 00:43:56.670113] idxd.c:1576:idxd_validate_dif_insert_params: *ERROR*: Application Tag check flag must be set. 00:10:05.610 passed 00:10:05.610 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-04-27 00:43:56.670152] idxd.c:1581:idxd_validate_dif_insert_params: *ERROR*: Reference Tag check flag must be set. 00:10:05.610 passed 00:10:05.610 Test: generate copy: iovecs-len validate ...[2024-04-27 00:43:56.670187] idxd.c:1608:idxd_validate_dif_insert_iovecs: *ERROR*: Invalid length of data in src (4096) and dst (4176) in iovecs[0]. 
00:10:05.610 passed 00:10:05.610 Test: generate copy: buffer alignment validate ...passed 00:10:05.610 00:10:05.610 Run Summary: Type Total Ran Passed Failed Inactive 00:10:05.610 suites 1 1 n/a 0 0 00:10:05.610 tests 20 20 20 0 0 00:10:05.610 asserts 204 204 204 0 n/a 00:10:05.610 00:10:05.610 Elapsed time = 0.005 seconds 00:10:07.517 00:10:07.517 real 0m12.139s 00:10:07.517 user 0m23.560s 00:10:07.517 sys 0m0.264s 00:10:07.517 00:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:07.517 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.517 ************************************ 00:10:07.517 END TEST accel_dif_functional_tests 00:10:07.517 ************************************ 00:10:07.517 00:10:07.517 real 3m59.573s 00:10:07.517 user 2m34.994s 00:10:07.517 sys 0m8.114s 00:10:07.517 00:43:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:07.517 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.517 ************************************ 00:10:07.517 END TEST accel 00:10:07.517 ************************************ 00:10:07.517 00:43:59 -- spdk/autotest.sh@180 -- # run_test accel_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:07.517 00:43:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:07.517 00:43:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:07.517 00:43:59 -- common/autotest_common.sh@10 -- # set +x 00:10:07.517 ************************************ 00:10:07.517 START TEST accel_rpc 00:10:07.517 ************************************ 00:10:07.517 00:44:00 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel/accel_rpc.sh 00:10:07.517 * Looking for test storage... 00:10:07.517 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/accel 00:10:07.517 00:44:00 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:10:07.517 00:44:00 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=2614687 00:10:07.517 00:44:00 -- accel/accel_rpc.sh@15 -- # waitforlisten 2614687 00:10:07.517 00:44:00 -- common/autotest_common.sh@817 -- # '[' -z 2614687 ']' 00:10:07.517 00:44:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:07.517 00:44:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:07.517 00:44:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:07.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:07.517 00:44:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:07.517 00:44:00 -- common/autotest_common.sh@10 -- # set +x 00:10:07.517 00:44:00 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:10:07.517 [2024-04-27 00:44:00.174528] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
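The accel_rpc suite starts spdk_tgt with --wait-for-rpc, which holds off framework initialization so accel modules can be enabled over JSON-RPC first; the scan suites that follow re-issue each scan call and expect the JSON-RPC -114 "Operation already in progress" error shown in their request/response dumps. The same flow by hand; the rpc.py path is assumed from a standard SPDK checkout, while the method names are taken from the rpc_cmd calls in the trace:

./build/bin/spdk_tgt --wait-for-rpc &       # target waits before subsystem init
./scripts/rpc.py dsa_scan_accel_module      # first scan succeeds
./scripts/rpc.py dsa_scan_accel_module \
  || echo "expected: -114 Operation already in progress"
./scripts/rpc.py framework_start_init       # finish init
./scripts/rpc.py accel_get_opc_assignments  # "copy" now resolves to a module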
00:10:07.517 [2024-04-27 00:44:00.174640] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2614687 ] 00:10:07.775 EAL: No free 2048 kB hugepages reported on node 1 00:10:07.775 [2024-04-27 00:44:00.286442] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:07.775 [2024-04-27 00:44:00.379536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:08.346 00:44:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:08.346 00:44:00 -- common/autotest_common.sh@850 -- # return 0 00:10:08.346 00:44:00 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:10:08.346 00:44:00 -- accel/accel_rpc.sh@45 -- # [[ 1 -gt 0 ]] 00:10:08.346 00:44:00 -- accel/accel_rpc.sh@46 -- # run_test accel_scan_dsa_modules accel_scan_dsa_modules_test_suite 00:10:08.346 00:44:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:08.346 00:44:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.346 00:44:00 -- common/autotest_common.sh@10 -- # set +x 00:10:08.346 ************************************ 00:10:08.346 START TEST accel_scan_dsa_modules 00:10:08.346 ************************************ 00:10:08.346 00:44:01 -- common/autotest_common.sh@1111 -- # accel_scan_dsa_modules_test_suite 00:10:08.346 00:44:01 -- accel/accel_rpc.sh@21 -- # rpc_cmd dsa_scan_accel_module 00:10:08.346 00:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.346 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.346 [2024-04-27 00:44:01.024043] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:10:08.346 00:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.346 00:44:01 -- accel/accel_rpc.sh@22 -- # NOT rpc_cmd dsa_scan_accel_module 00:10:08.346 00:44:01 -- common/autotest_common.sh@638 -- # local es=0 00:10:08.346 00:44:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd dsa_scan_accel_module 00:10:08.346 00:44:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:10:08.346 00:44:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:08.346 00:44:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:10:08.346 00:44:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:08.346 00:44:01 -- common/autotest_common.sh@641 -- # rpc_cmd dsa_scan_accel_module 00:10:08.346 00:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.346 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.346 request: 00:10:08.346 { 00:10:08.346 "method": "dsa_scan_accel_module", 00:10:08.346 "req_id": 1 00:10:08.346 } 00:10:08.346 Got JSON-RPC error response 00:10:08.346 response: 00:10:08.346 { 00:10:08.346 "code": -114, 00:10:08.346 "message": "Operation already in progress" 00:10:08.346 } 00:10:08.346 00:44:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:10:08.346 00:44:01 -- common/autotest_common.sh@641 -- # es=1 00:10:08.346 00:44:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:08.346 00:44:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:08.346 00:44:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:08.346 00:10:08.346 real 0m0.016s 00:10:08.346 user 0m0.004s 00:10:08.346 sys 0m0.000s 00:10:08.346 00:44:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:08.346 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.346 
************************************ 00:10:08.346 END TEST accel_scan_dsa_modules 00:10:08.346 ************************************ 00:10:08.607 00:44:01 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:10:08.607 00:44:01 -- accel/accel_rpc.sh@49 -- # [[ 1 -gt 0 ]] 00:10:08.607 00:44:01 -- accel/accel_rpc.sh@50 -- # run_test accel_scan_iaa_modules accel_scan_iaa_modules_test_suite 00:10:08.607 00:44:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:08.607 00:44:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.607 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.607 ************************************ 00:10:08.607 START TEST accel_scan_iaa_modules 00:10:08.607 ************************************ 00:10:08.607 00:44:01 -- common/autotest_common.sh@1111 -- # accel_scan_iaa_modules_test_suite 00:10:08.607 00:44:01 -- accel/accel_rpc.sh@29 -- # rpc_cmd iaa_scan_accel_module 00:10:08.607 00:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.607 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.607 [2024-04-27 00:44:01.176066] accel_iaa_rpc.c: 33:rpc_iaa_scan_accel_module: *NOTICE*: Enabled IAA user-mode 00:10:08.607 00:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.607 00:44:01 -- accel/accel_rpc.sh@30 -- # NOT rpc_cmd iaa_scan_accel_module 00:10:08.607 00:44:01 -- common/autotest_common.sh@638 -- # local es=0 00:10:08.607 00:44:01 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd iaa_scan_accel_module 00:10:08.607 00:44:01 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:10:08.607 00:44:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:08.607 00:44:01 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:10:08.607 00:44:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:08.607 00:44:01 -- common/autotest_common.sh@641 -- # rpc_cmd iaa_scan_accel_module 00:10:08.607 00:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.607 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.607 request: 00:10:08.607 { 00:10:08.607 "method": "iaa_scan_accel_module", 00:10:08.607 "req_id": 1 00:10:08.607 } 00:10:08.607 Got JSON-RPC error response 00:10:08.607 response: 00:10:08.607 { 00:10:08.607 "code": -114, 00:10:08.607 "message": "Operation already in progress" 00:10:08.607 } 00:10:08.607 00:44:01 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:10:08.607 00:44:01 -- common/autotest_common.sh@641 -- # es=1 00:10:08.607 00:44:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:08.607 00:44:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:08.607 00:44:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:08.607 00:10:08.607 real 0m0.024s 00:10:08.607 user 0m0.006s 00:10:08.607 sys 0m0.001s 00:10:08.607 00:44:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:08.607 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.607 ************************************ 00:10:08.607 END TEST accel_scan_iaa_modules 00:10:08.607 ************************************ 00:10:08.607 00:44:01 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:10:08.607 00:44:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:08.607 00:44:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:08.607 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.866 ************************************ 00:10:08.866 START TEST accel_assign_opcode 
00:10:08.866 ************************************ 00:10:08.866 00:44:01 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:10:08.866 00:44:01 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:10:08.866 00:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.866 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.866 [2024-04-27 00:44:01.332098] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:10:08.866 00:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.866 00:44:01 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:10:08.866 00:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.866 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:08.866 [2024-04-27 00:44:01.344077] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:10:08.866 00:44:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:08.866 00:44:01 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:10:08.866 00:44:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:08.866 00:44:01 -- common/autotest_common.sh@10 -- # set +x 00:10:18.857 00:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.857 00:44:10 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:10:18.857 00:44:10 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:10:18.857 00:44:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:18.857 00:44:10 -- common/autotest_common.sh@10 -- # set +x 00:10:18.857 00:44:10 -- accel/accel_rpc.sh@42 -- # grep software 00:10:18.857 00:44:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:18.857 software 00:10:18.857 00:10:18.857 real 0m8.913s 00:10:18.857 user 0m0.037s 00:10:18.857 sys 0m0.009s 00:10:18.857 00:44:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:18.857 00:44:10 -- common/autotest_common.sh@10 -- # set +x 00:10:18.857 ************************************ 00:10:18.857 END TEST accel_assign_opcode 00:10:18.857 ************************************ 00:10:18.857 00:44:10 -- accel/accel_rpc.sh@55 -- # killprocess 2614687 00:10:18.857 00:44:10 -- common/autotest_common.sh@936 -- # '[' -z 2614687 ']' 00:10:18.857 00:44:10 -- common/autotest_common.sh@940 -- # kill -0 2614687 00:10:18.857 00:44:10 -- common/autotest_common.sh@941 -- # uname 00:10:18.857 00:44:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:18.857 00:44:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2614687 00:10:18.857 00:44:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:18.857 00:44:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:18.857 00:44:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2614687' 00:10:18.857 killing process with pid 2614687 00:10:18.857 00:44:10 -- common/autotest_common.sh@955 -- # kill 2614687 00:10:18.857 00:44:10 -- common/autotest_common.sh@960 -- # wait 2614687 00:10:21.388 00:10:21.388 real 0m13.935s 00:10:21.388 user 0m4.476s 00:10:21.388 sys 0m0.770s 00:10:21.388 00:44:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:21.388 00:44:13 -- common/autotest_common.sh@10 -- # set +x 00:10:21.388 ************************************ 00:10:21.388 END TEST accel_rpc 00:10:21.388 ************************************ 00:10:21.388 00:44:13 -- spdk/autotest.sh@181 -- # run_test app_cmdline 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:10:21.388 00:44:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:21.388 00:44:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:21.388 00:44:13 -- common/autotest_common.sh@10 -- # set +x 00:10:21.649 ************************************ 00:10:21.649 START TEST app_cmdline 00:10:21.649 ************************************ 00:10:21.649 00:44:14 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/cmdline.sh 00:10:21.649 * Looking for test storage... 00:10:21.649 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:10:21.649 00:44:14 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:10:21.649 00:44:14 -- app/cmdline.sh@17 -- # spdk_tgt_pid=2617547 00:10:21.649 00:44:14 -- app/cmdline.sh@18 -- # waitforlisten 2617547 00:10:21.649 00:44:14 -- common/autotest_common.sh@817 -- # '[' -z 2617547 ']' 00:10:21.649 00:44:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.649 00:44:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:21.649 00:44:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.649 00:44:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:21.649 00:44:14 -- common/autotest_common.sh@10 -- # set +x 00:10:21.649 00:44:14 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:10:21.649 [2024-04-27 00:44:14.294479] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
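The launch being traced here restricts the target's RPC surface; a sketch, assuming the standard SPDK build layout, of starting a target that serves only the two methods this test needs:

  ./build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  # wait for /var/tmp/spdk.sock to accept connections (the waitforlisten
  # helper above does this), after which any method outside the allowlist
  # is rejected with JSON-RPC code -32601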
00:10:21.649 [2024-04-27 00:44:14.294620] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2617547 ] 00:10:21.909 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.909 [2024-04-27 00:44:14.431604] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:21.909 [2024-04-27 00:44:14.523270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.477 00:44:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:22.477 00:44:15 -- common/autotest_common.sh@850 -- # return 0 00:10:22.477 00:44:15 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:10:22.477 { 00:10:22.477 "version": "SPDK v24.05-pre git sha1 d4fbb5733", 00:10:22.477 "fields": { 00:10:22.477 "major": 24, 00:10:22.477 "minor": 5, 00:10:22.477 "patch": 0, 00:10:22.477 "suffix": "-pre", 00:10:22.477 "commit": "d4fbb5733" 00:10:22.477 } 00:10:22.477 } 00:10:22.737 00:44:15 -- app/cmdline.sh@22 -- # expected_methods=() 00:10:22.737 00:44:15 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:10:22.737 00:44:15 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:10:22.737 00:44:15 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:10:22.737 00:44:15 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:10:22.737 00:44:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:22.737 00:44:15 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:10:22.737 00:44:15 -- common/autotest_common.sh@10 -- # set +x 00:10:22.737 00:44:15 -- app/cmdline.sh@26 -- # sort 00:10:22.737 00:44:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:22.737 00:44:15 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:10:22.737 00:44:15 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:10:22.737 00:44:15 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:22.737 00:44:15 -- common/autotest_common.sh@638 -- # local es=0 00:10:22.737 00:44:15 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:22.737 00:44:15 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:10:22.737 00:44:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:22.737 00:44:15 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:10:22.737 00:44:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:22.737 00:44:15 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:10:22.737 00:44:15 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:10:22.737 00:44:15 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:10:22.737 00:44:15 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:10:22.737 00:44:15 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:10:22.737 request: 00:10:22.737 { 00:10:22.737 "method": "env_dpdk_get_mem_stats", 
00:10:22.737 "req_id": 1 00:10:22.737 } 00:10:22.737 Got JSON-RPC error response 00:10:22.737 response: 00:10:22.737 { 00:10:22.737 "code": -32601, 00:10:22.737 "message": "Method not found" 00:10:22.737 } 00:10:22.737 00:44:15 -- common/autotest_common.sh@641 -- # es=1 00:10:22.737 00:44:15 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:10:22.737 00:44:15 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:10:22.737 00:44:15 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:10:22.737 00:44:15 -- app/cmdline.sh@1 -- # killprocess 2617547 00:10:22.737 00:44:15 -- common/autotest_common.sh@936 -- # '[' -z 2617547 ']' 00:10:22.737 00:44:15 -- common/autotest_common.sh@940 -- # kill -0 2617547 00:10:22.737 00:44:15 -- common/autotest_common.sh@941 -- # uname 00:10:22.737 00:44:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:22.737 00:44:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2617547 00:10:22.737 00:44:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:10:22.737 00:44:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:10:22.737 00:44:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2617547' 00:10:22.737 killing process with pid 2617547 00:10:22.737 00:44:15 -- common/autotest_common.sh@955 -- # kill 2617547 00:10:22.737 00:44:15 -- common/autotest_common.sh@960 -- # wait 2617547 00:10:23.673 00:10:23.673 real 0m2.164s 00:10:23.673 user 0m2.331s 00:10:23.673 sys 0m0.525s 00:10:23.673 00:44:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:23.673 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:23.673 ************************************ 00:10:23.673 END TEST app_cmdline 00:10:23.673 ************************************ 00:10:23.673 00:44:16 -- spdk/autotest.sh@182 -- # run_test version /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:10:23.673 00:44:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:10:23.673 00:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.673 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:23.932 ************************************ 00:10:23.932 START TEST version 00:10:23.932 ************************************ 00:10:23.932 00:44:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/version.sh 00:10:23.932 * Looking for test storage... 
00:10:23.933 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:10:23.933 00:44:16 -- app/version.sh@17 -- # get_header_version major 00:10:23.933 00:44:16 -- app/version.sh@14 -- # cut -f2 00:10:23.933 00:44:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:10:23.933 00:44:16 -- app/version.sh@14 -- # tr -d '"' 00:10:23.933 00:44:16 -- app/version.sh@17 -- # major=24 00:10:23.933 00:44:16 -- app/version.sh@18 -- # get_header_version minor 00:10:23.933 00:44:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:10:23.933 00:44:16 -- app/version.sh@14 -- # tr -d '"' 00:10:23.933 00:44:16 -- app/version.sh@14 -- # cut -f2 00:10:23.933 00:44:16 -- app/version.sh@18 -- # minor=5 00:10:23.933 00:44:16 -- app/version.sh@19 -- # get_header_version patch 00:10:23.933 00:44:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:10:23.933 00:44:16 -- app/version.sh@14 -- # cut -f2 00:10:23.933 00:44:16 -- app/version.sh@14 -- # tr -d '"' 00:10:23.933 00:44:16 -- app/version.sh@19 -- # patch=0 00:10:23.933 00:44:16 -- app/version.sh@20 -- # get_header_version suffix 00:10:23.933 00:44:16 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/version.h 00:10:23.933 00:44:16 -- app/version.sh@14 -- # cut -f2 00:10:23.933 00:44:16 -- app/version.sh@14 -- # tr -d '"' 00:10:23.933 00:44:16 -- app/version.sh@20 -- # suffix=-pre 00:10:23.933 00:44:16 -- app/version.sh@22 -- # version=24.5 00:10:23.933 00:44:16 -- app/version.sh@25 -- # (( patch != 0 )) 00:10:23.933 00:44:16 -- app/version.sh@28 -- # version=24.5rc0 00:10:23.933 00:44:16 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:10:23.933 00:44:16 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:10:23.933 00:44:16 -- app/version.sh@30 -- # py_version=24.5rc0 00:10:23.933 00:44:16 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:10:23.933 00:10:23.933 real 0m0.126s 00:10:23.933 user 0m0.061s 00:10:23.933 sys 0m0.094s 00:10:23.933 00:44:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:23.933 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:23.933 ************************************ 00:10:23.933 END TEST version 00:10:23.933 ************************************ 00:10:23.933 00:44:16 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:10:23.933 00:44:16 -- spdk/autotest.sh@194 -- # uname -s 00:10:23.933 00:44:16 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:10:23.933 00:44:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:23.933 00:44:16 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:10:23.933 00:44:16 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:10:23.933 00:44:16 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:10:23.933 00:44:16 -- spdk/autotest.sh@258 -- # timing_exit lib 00:10:23.933 00:44:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:23.933 00:44:16 -- common/autotest_common.sh@10 -- # set +x 
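The version test reduces to a small header-parsing helper plus a comparison against the Python package; a condensed sketch of what the trace above does (paths abbreviated; the script additionally maps the "-pre" suffix to "rc0", giving 24.5rc0 here):

  get_header_version() {   # e.g. get_header_version MAJOR -> 24
    grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" include/spdk/version.h |
      cut -f2 | tr -d '"'
  }
  ver="$(get_header_version MAJOR).$(get_header_version MINOR)"   # 24.5
  python3 -c 'import spdk; print(spdk.__version__)'               # must print 24.5rc0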
00:10:23.933 00:44:16 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:10:23.933 00:44:16 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:10:23.933 00:44:16 -- spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:10:23.933 00:44:16 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:10:23.933 00:44:16 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:10:23.933 00:44:16 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:10:23.933 00:44:16 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:23.933 00:44:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:23.933 00:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:23.933 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:24.193 ************************************ 00:10:24.193 START TEST nvmf_tcp 00:10:24.193 ************************************ 00:10:24.193 00:44:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:10:24.193 * Looking for test storage... 00:10:24.193 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf 00:10:24.193 00:44:16 -- nvmf/nvmf.sh@10 -- # uname -s 00:10:24.193 00:44:16 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:10:24.193 00:44:16 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.193 00:44:16 -- nvmf/common.sh@7 -- # uname -s 00:10:24.193 00:44:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.193 00:44:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.193 00:44:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.193 00:44:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.193 00:44:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.193 00:44:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.193 00:44:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.193 00:44:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.193 00:44:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.193 00:44:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.193 00:44:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:10:24.193 00:44:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:10:24.193 00:44:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.193 00:44:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.193 00:44:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:24.193 00:44:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.193 00:44:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:24.194 00:44:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.194 00:44:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.194 00:44:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.194 00:44:16 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.194 00:44:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.194 00:44:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.194 00:44:16 -- paths/export.sh@5 -- # export PATH 00:10:24.194 00:44:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.194 00:44:16 -- nvmf/common.sh@47 -- # : 0 00:10:24.194 00:44:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.194 00:44:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.194 00:44:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.194 00:44:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.194 00:44:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.194 00:44:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.194 00:44:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.194 00:44:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.194 00:44:16 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:24.194 00:44:16 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:10:24.194 00:44:16 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:10:24.194 00:44:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:24.194 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:24.194 00:44:16 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:10:24.194 00:44:16 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:24.194 00:44:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:24.194 00:44:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:24.194 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:24.194 ************************************ 00:10:24.194 START TEST nvmf_example 00:10:24.194 ************************************ 00:10:24.194 00:44:16 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:24.455 * Looking for test storage... 
00:10:24.455 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:24.455 00:44:16 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:24.455 00:44:16 -- nvmf/common.sh@7 -- # uname -s 00:10:24.455 00:44:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:24.455 00:44:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:24.455 00:44:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:24.455 00:44:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:24.455 00:44:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:24.455 00:44:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:24.455 00:44:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:24.455 00:44:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:24.455 00:44:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:24.455 00:44:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:24.455 00:44:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:10:24.455 00:44:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:10:24.455 00:44:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:24.455 00:44:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:24.455 00:44:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:24.455 00:44:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:24.455 00:44:16 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:24.455 00:44:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:24.455 00:44:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:24.455 00:44:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:24.455 00:44:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.455 00:44:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.455 00:44:16 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.455 00:44:16 -- paths/export.sh@5 -- # export PATH 00:10:24.455 00:44:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:24.455 00:44:16 -- nvmf/common.sh@47 -- # : 0 00:10:24.455 00:44:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:24.455 00:44:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:24.455 00:44:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:24.455 00:44:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:24.455 00:44:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:24.455 00:44:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:24.455 00:44:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:24.455 00:44:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:24.455 00:44:16 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:24.455 00:44:16 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:24.455 00:44:16 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:24.455 00:44:16 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:24.455 00:44:16 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:24.455 00:44:16 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:24.455 00:44:16 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:24.455 00:44:16 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:24.455 00:44:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:24.455 00:44:16 -- common/autotest_common.sh@10 -- # set +x 00:10:24.455 00:44:16 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:24.455 00:44:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:24.455 00:44:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:24.455 00:44:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:24.455 00:44:16 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:24.455 00:44:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:24.455 00:44:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.455 00:44:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:24.455 00:44:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:24.455 00:44:16 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:10:24.455 00:44:16 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:24.455 00:44:16 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:24.455 00:44:16 -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.808 00:44:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:29.808 00:44:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:29.808 00:44:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:29.808 00:44:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:29.808 00:44:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:29.808 00:44:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:29.808 00:44:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:29.808 00:44:22 -- nvmf/common.sh@295 -- # net_devs=() 00:10:29.808 00:44:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:29.808 00:44:22 -- nvmf/common.sh@296 -- # e810=() 00:10:29.808 00:44:22 -- nvmf/common.sh@296 -- # local -ga e810 00:10:29.808 00:44:22 -- nvmf/common.sh@297 -- # x722=() 00:10:29.808 00:44:22 -- nvmf/common.sh@297 -- # local -ga x722 00:10:29.808 00:44:22 -- nvmf/common.sh@298 -- # mlx=() 00:10:29.808 00:44:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:29.808 00:44:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:29.808 00:44:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:29.808 00:44:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:29.808 00:44:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.808 00:44:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:10:29.808 Found 0000:27:00.0 (0x8086 - 0x159b) 00:10:29.808 00:44:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:29.808 00:44:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:10:29.808 Found 0000:27:00.1 (0x8086 - 0x159b) 00:10:29.808 00:44:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:29.808 
00:44:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:29.808 00:44:22 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.808 00:44:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.808 00:44:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:29.808 00:44:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.808 00:44:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:10:29.808 Found net devices under 0000:27:00.0: cvl_0_0 00:10:29.808 00:44:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.808 00:44:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:29.808 00:44:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:29.808 00:44:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:29.808 00:44:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:29.808 00:44:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:10:29.808 Found net devices under 0000:27:00.1: cvl_0_1 00:10:29.808 00:44:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:29.808 00:44:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:29.808 00:44:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:29.808 00:44:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:29.808 00:44:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:29.808 00:44:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:29.808 00:44:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:29.808 00:44:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:29.808 00:44:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:29.808 00:44:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:29.808 00:44:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:29.808 00:44:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:29.808 00:44:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:29.808 00:44:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:29.808 00:44:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:29.808 00:44:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:29.808 00:44:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:29.808 00:44:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:29.808 00:44:22 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:29.808 00:44:22 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:29.808 00:44:22 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:29.808 00:44:22 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:30.068 00:44:22 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:30.068 00:44:22 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:30.068 00:44:22 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:30.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:30.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.448 ms 00:10:30.068 00:10:30.068 --- 10.0.0.2 ping statistics --- 00:10:30.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.068 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms 00:10:30.068 00:44:22 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:30.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:30.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:10:30.068 00:10:30.068 --- 10.0.0.1 ping statistics --- 00:10:30.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:30.068 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:10:30.068 00:44:22 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:30.068 00:44:22 -- nvmf/common.sh@411 -- # return 0 00:10:30.068 00:44:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:30.068 00:44:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:30.068 00:44:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:30.068 00:44:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:30.068 00:44:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:30.068 00:44:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:30.068 00:44:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:30.068 00:44:22 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:30.068 00:44:22 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:30.068 00:44:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:30.068 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:10:30.068 00:44:22 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:30.068 00:44:22 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:30.068 00:44:22 -- target/nvmf_example.sh@34 -- # nvmfpid=2621811 00:10:30.068 00:44:22 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:30.068 00:44:22 -- target/nvmf_example.sh@36 -- # waitforlisten 2621811 00:10:30.068 00:44:22 -- common/autotest_common.sh@817 -- # '[' -z 2621811 ']' 00:10:30.068 00:44:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.068 00:44:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:30.068 00:44:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
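The namespace plumbing behind the two pings, collected from the nvmf_tcp_init trace above (the interface names cvl_0_0/cvl_0_1 are specific to this machine); the target-side port is moved into its own network namespace so initiator and target use separate TCP/IP stacks on a single host:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP port
  ping -c 1 10.0.0.2                                                  # ~0.45 ms above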
00:10:30.068 00:44:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:30.068 00:44:22 -- common/autotest_common.sh@10 -- # set +x 00:10:30.068 00:44:22 -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:30.068 EAL: No free 2048 kB hugepages reported on node 1 00:10:31.009 00:44:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:31.009 00:44:23 -- common/autotest_common.sh@850 -- # return 0 00:10:31.009 00:44:23 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:31.009 00:44:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:31.009 00:44:23 -- common/autotest_common.sh@10 -- # set +x 00:10:31.009 00:44:23 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:31.009 00:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.009 00:44:23 -- common/autotest_common.sh@10 -- # set +x 00:10:31.009 00:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.009 00:44:23 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:31.009 00:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.009 00:44:23 -- common/autotest_common.sh@10 -- # set +x 00:10:31.009 00:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.009 00:44:23 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:31.009 00:44:23 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:31.009 00:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.009 00:44:23 -- common/autotest_common.sh@10 -- # set +x 00:10:31.009 00:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.009 00:44:23 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:31.009 00:44:23 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:31.009 00:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.009 00:44:23 -- common/autotest_common.sh@10 -- # set +x 00:10:31.009 00:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.009 00:44:23 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:31.009 00:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:31.009 00:44:23 -- common/autotest_common.sh@10 -- # set +x 00:10:31.009 00:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:31.009 00:44:23 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:31.009 00:44:23 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:31.009 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.229 Initializing NVMe Controllers 00:10:43.229 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:43.229 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:43.229 Initialization complete. Launching workers. 
00:10:43.229 ======================================================== 00:10:43.229 Latency(us) 00:10:43.229 Device Information : IOPS MiB/s Average min max 00:10:43.229 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18426.10 71.98 3473.83 699.08 15580.71 00:10:43.229 ======================================================== 00:10:43.229 Total : 18426.10 71.98 3473.83 699.08 15580.71 00:10:43.229 00:10:43.229 00:44:33 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:43.229 00:44:33 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:43.229 00:44:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:10:43.229 00:44:33 -- nvmf/common.sh@117 -- # sync 00:10:43.229 00:44:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:43.229 00:44:33 -- nvmf/common.sh@120 -- # set +e 00:10:43.229 00:44:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:43.229 00:44:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:43.229 rmmod nvme_tcp 00:10:43.229 rmmod nvme_fabrics 00:10:43.229 rmmod nvme_keyring 00:10:43.229 00:44:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:43.229 00:44:33 -- nvmf/common.sh@124 -- # set -e 00:10:43.229 00:44:33 -- nvmf/common.sh@125 -- # return 0 00:10:43.229 00:44:33 -- nvmf/common.sh@478 -- # '[' -n 2621811 ']' 00:10:43.229 00:44:33 -- nvmf/common.sh@479 -- # killprocess 2621811 00:10:43.229 00:44:33 -- common/autotest_common.sh@936 -- # '[' -z 2621811 ']' 00:10:43.229 00:44:33 -- common/autotest_common.sh@940 -- # kill -0 2621811 00:10:43.229 00:44:33 -- common/autotest_common.sh@941 -- # uname 00:10:43.229 00:44:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:10:43.229 00:44:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2621811 00:10:43.229 00:44:33 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:10:43.229 00:44:33 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:10:43.229 00:44:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2621811' 00:10:43.229 killing process with pid 2621811 00:10:43.229 00:44:33 -- common/autotest_common.sh@955 -- # kill 2621811 00:10:43.229 00:44:33 -- common/autotest_common.sh@960 -- # wait 2621811 00:10:43.229 nvmf threads initialize successfully 00:10:43.229 bdev subsystem init successfully 00:10:43.229 created a nvmf target service 00:10:43.229 create targets's poll groups done 00:10:43.229 all subsystems of target started 00:10:43.229 nvmf target is running 00:10:43.229 all subsystems of target stopped 00:10:43.229 destroy targets's poll groups done 00:10:43.229 destroyed the nvmf target service 00:10:43.229 bdev subsystem finish successfully 00:10:43.229 nvmf threads destroy successfully 00:10:43.229 00:44:34 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:10:43.229 00:44:34 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:10:43.229 00:44:34 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:10:43.229 00:44:34 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.229 00:44:34 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.229 00:44:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.229 00:44:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:43.229 00:44:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:43.799 00:44:36 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:43.799 00:44:36 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:43.799 00:44:36 -- 
common/autotest_common.sh@716 -- # xtrace_disable 00:10:43.799 00:44:36 -- common/autotest_common.sh@10 -- # set +x 00:10:43.799 00:10:43.799 real 0m19.559s 00:10:43.799 user 0m46.621s 00:10:43.799 sys 0m5.258s 00:10:43.799 00:44:36 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:43.799 00:44:36 -- common/autotest_common.sh@10 -- # set +x 00:10:43.799 ************************************ 00:10:43.799 END TEST nvmf_example 00:10:43.799 ************************************ 00:10:43.799 00:44:36 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:43.799 00:44:36 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:43.799 00:44:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:43.799 00:44:36 -- common/autotest_common.sh@10 -- # set +x 00:10:44.063 ************************************ 00:10:44.063 START TEST nvmf_filesystem 00:10:44.063 ************************************ 00:10:44.063 00:44:36 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:44.063 * Looking for test storage... 00:10:44.063 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:44.063 00:44:36 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh 00:10:44.063 00:44:36 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:44.063 00:44:36 -- common/autotest_common.sh@34 -- # set -e 00:10:44.063 00:44:36 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:44.063 00:44:36 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:44.063 00:44:36 -- common/autotest_common.sh@38 -- # '[' -z /var/jenkins/workspace/dsa-phy-autotest/spdk/../output ']' 00:10:44.063 00:44:36 -- common/autotest_common.sh@43 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:44.063 00:44:36 -- common/autotest_common.sh@44 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/build_config.sh 00:10:44.063 00:44:36 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:44.063 00:44:36 -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:10:44.063 00:44:36 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:44.063 00:44:36 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:44.063 00:44:36 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:44.063 00:44:36 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:44.063 00:44:36 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:44.063 00:44:36 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:44.063 00:44:36 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:44.063 00:44:36 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:44.063 00:44:36 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:44.063 00:44:36 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:44.063 00:44:36 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:44.063 00:44:36 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:44.063 00:44:36 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:44.063 00:44:36 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:44.063 00:44:36 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:10:44.063 00:44:36 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:44.063 00:44:36 -- common/build_config.sh@19 -- # 
CONFIG_ENV=/var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:10:44.063 00:44:36 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:10:44.063 00:44:36 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:10:44.063 00:44:36 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:10:44.063 00:44:36 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:44.063 00:44:36 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:10:44.063 00:44:36 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:10:44.063 00:44:36 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:44.063 00:44:36 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:44.063 00:44:36 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:10:44.063 00:44:36 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:10:44.063 00:44:36 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:10:44.063 00:44:36 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:10:44.063 00:44:36 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:10:44.063 00:44:36 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:10:44.063 00:44:36 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:10:44.063 00:44:36 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:10:44.063 00:44:36 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:10:44.063 00:44:36 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:10:44.063 00:44:36 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:10:44.063 00:44:36 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:10:44.063 00:44:36 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:10:44.063 00:44:36 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:10:44.063 00:44:36 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:10:44.063 00:44:36 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:10:44.063 00:44:36 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:44.063 00:44:36 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:10:44.063 00:44:36 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:10:44.063 00:44:36 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:10:44.063 00:44:36 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:44.063 00:44:36 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:10:44.063 00:44:36 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:10:44.063 00:44:36 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:10:44.063 00:44:36 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:10:44.063 00:44:36 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:10:44.063 00:44:36 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:10:44.063 00:44:36 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:10:44.063 00:44:36 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:10:44.063 00:44:36 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:10:44.063 00:44:36 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:10:44.063 00:44:36 -- common/build_config.sh@59 -- # CONFIG_GOLANG=n 00:10:44.063 00:44:36 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:10:44.063 00:44:36 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:10:44.063 00:44:36 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:10:44.063 00:44:36 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:10:44.063 00:44:36 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:10:44.063 00:44:36 -- common/build_config.sh@65 -- # 
CONFIG_SHARED=y 00:10:44.063 00:44:36 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:10:44.063 00:44:36 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:10:44.063 00:44:36 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:10:44.063 00:44:36 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:10:44.063 00:44:36 -- common/build_config.sh@70 -- # CONFIG_AVAHI=n 00:10:44.063 00:44:36 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:10:44.063 00:44:36 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:10:44.063 00:44:36 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:10:44.063 00:44:36 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:10:44.063 00:44:36 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:10:44.063 00:44:36 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:10:44.063 00:44:36 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:10:44.063 00:44:36 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:10:44.063 00:44:36 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:10:44.063 00:44:36 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:44.063 00:44:36 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:10:44.063 00:44:36 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:10:44.063 00:44:36 -- common/autotest_common.sh@53 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:10:44.063 00:44:36 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/applications.sh 00:10:44.063 00:44:36 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:10:44.063 00:44:36 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/common 00:10:44.063 00:44:36 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:10:44.063 00:44:36 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:10:44.063 00:44:36 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/app 00:10:44.063 00:44:36 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:10:44.063 00:44:36 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:44.063 00:44:36 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:44.063 00:44:36 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:44.063 00:44:36 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:44.063 00:44:36 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:44.063 00:44:36 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:44.063 00:44:36 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk/config.h ]] 00:10:44.063 00:44:36 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:44.063 #define SPDK_CONFIG_H 00:10:44.063 #define SPDK_CONFIG_APPS 1 00:10:44.063 #define SPDK_CONFIG_ARCH native 00:10:44.063 #define SPDK_CONFIG_ASAN 1 00:10:44.063 #undef SPDK_CONFIG_AVAHI 00:10:44.063 #undef SPDK_CONFIG_CET 00:10:44.063 #define SPDK_CONFIG_COVERAGE 1 00:10:44.063 #define SPDK_CONFIG_CROSS_PREFIX 00:10:44.063 #undef SPDK_CONFIG_CRYPTO 00:10:44.063 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:44.063 #undef SPDK_CONFIG_CUSTOMOCF 00:10:44.063 #undef SPDK_CONFIG_DAOS 00:10:44.063 #define 
SPDK_CONFIG_DAOS_DIR 00:10:44.063 #define SPDK_CONFIG_DEBUG 1 00:10:44.063 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:44.063 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build 00:10:44.063 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:44.063 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:44.063 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:44.063 #define SPDK_CONFIG_ENV /var/jenkins/workspace/dsa-phy-autotest/spdk/lib/env_dpdk 00:10:44.063 #define SPDK_CONFIG_EXAMPLES 1 00:10:44.063 #undef SPDK_CONFIG_FC 00:10:44.064 #define SPDK_CONFIG_FC_PATH 00:10:44.064 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:44.064 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:44.064 #undef SPDK_CONFIG_FUSE 00:10:44.064 #undef SPDK_CONFIG_FUZZER 00:10:44.064 #define SPDK_CONFIG_FUZZER_LIB 00:10:44.064 #undef SPDK_CONFIG_GOLANG 00:10:44.064 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:44.064 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:44.064 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:44.064 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:10:44.064 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:44.064 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:44.064 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:44.064 #define SPDK_CONFIG_IDXD 1 00:10:44.064 #undef SPDK_CONFIG_IDXD_KERNEL 00:10:44.064 #undef SPDK_CONFIG_IPSEC_MB 00:10:44.064 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:44.064 #define SPDK_CONFIG_ISAL 1 00:10:44.064 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:44.064 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:44.064 #define SPDK_CONFIG_LIBDIR 00:10:44.064 #undef SPDK_CONFIG_LTO 00:10:44.064 #define SPDK_CONFIG_MAX_LCORES 00:10:44.064 #define SPDK_CONFIG_NVME_CUSE 1 00:10:44.064 #undef SPDK_CONFIG_OCF 00:10:44.064 #define SPDK_CONFIG_OCF_PATH 00:10:44.064 #define SPDK_CONFIG_OPENSSL_PATH 00:10:44.064 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:44.064 #define SPDK_CONFIG_PGO_DIR 00:10:44.064 #undef SPDK_CONFIG_PGO_USE 00:10:44.064 #define SPDK_CONFIG_PREFIX /usr/local 00:10:44.064 #undef SPDK_CONFIG_RAID5F 00:10:44.064 #undef SPDK_CONFIG_RBD 00:10:44.064 #define SPDK_CONFIG_RDMA 1 00:10:44.064 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:44.064 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:44.064 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:44.064 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:44.064 #define SPDK_CONFIG_SHARED 1 00:10:44.064 #undef SPDK_CONFIG_SMA 00:10:44.064 #define SPDK_CONFIG_TESTS 1 00:10:44.064 #undef SPDK_CONFIG_TSAN 00:10:44.064 #define SPDK_CONFIG_UBLK 1 00:10:44.064 #define SPDK_CONFIG_UBSAN 1 00:10:44.064 #undef SPDK_CONFIG_UNIT_TESTS 00:10:44.064 #undef SPDK_CONFIG_URING 00:10:44.064 #define SPDK_CONFIG_URING_PATH 00:10:44.064 #undef SPDK_CONFIG_URING_ZNS 00:10:44.064 #undef SPDK_CONFIG_USDT 00:10:44.064 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:44.064 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:44.064 #undef SPDK_CONFIG_VFIO_USER 00:10:44.064 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:44.064 #define SPDK_CONFIG_VHOST 1 00:10:44.064 #define SPDK_CONFIG_VIRTIO 1 00:10:44.064 #undef SPDK_CONFIG_VTUNE 00:10:44.064 #define SPDK_CONFIG_VTUNE_DIR 00:10:44.064 #define SPDK_CONFIG_WERROR 1 00:10:44.064 #define SPDK_CONFIG_WPDK_DIR 00:10:44.064 #undef SPDK_CONFIG_XNVME 00:10:44.064 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:44.064 00:44:36 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:44.064 00:44:36 -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:44.064 00:44:36 -- scripts/common.sh@508 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:10:44.064 00:44:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.064 00:44:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.064 00:44:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.064 00:44:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.064 00:44:36 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.064 00:44:36 -- paths/export.sh@5 -- # export PATH 00:10:44.064 00:44:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.064 00:44:36 -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:10:44.064 00:44:36 -- pm/common@6 -- # dirname /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/common 00:10:44.064 00:44:36 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:10:44.064 00:44:36 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm 00:10:44.064 00:44:36 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:44.064 00:44:36 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/dsa-phy-autotest/spdk 00:10:44.064 00:44:36 -- pm/common@67 -- # TEST_TAG=N/A 00:10:44.064 00:44:36 -- pm/common@68 -- # TEST_TAG_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/.run_test_name 00:10:44.064 00:44:36 -- pm/common@70 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:10:44.064 00:44:36 -- pm/common@71 -- # uname -s 00:10:44.064 00:44:36 -- pm/common@71 -- # PM_OS=Linux 00:10:44.064 00:44:36 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:44.064 00:44:36 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:10:44.064 00:44:36 -- pm/common@76 -- # [[ Linux == Linux ]] 00:10:44.064 00:44:36 -- pm/common@76 -- # [[ ............................... != QEMU ]] 00:10:44.064 00:44:36 -- pm/common@76 -- # [[ ! -e /.dockerenv ]] 00:10:44.064 00:44:36 -- pm/common@79 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:44.064 00:44:36 -- pm/common@80 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:44.064 00:44:36 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:10:44.064 00:44:36 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:10:44.064 00:44:36 -- pm/common@85 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power 00:10:44.064 00:44:36 -- common/autotest_common.sh@57 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:10:44.064 00:44:36 -- common/autotest_common.sh@61 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:44.064 00:44:36 -- common/autotest_common.sh@63 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:10:44.064 00:44:36 -- common/autotest_common.sh@65 -- # : 1 00:10:44.064 00:44:36 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:44.064 00:44:36 -- common/autotest_common.sh@67 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:10:44.064 00:44:36 -- common/autotest_common.sh@69 -- # : 00:10:44.064 00:44:36 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:10:44.064 00:44:36 -- common/autotest_common.sh@71 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:10:44.064 00:44:36 -- common/autotest_common.sh@73 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:10:44.064 00:44:36 -- common/autotest_common.sh@75 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:10:44.064 00:44:36 -- common/autotest_common.sh@77 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:44.064 00:44:36 -- common/autotest_common.sh@79 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:10:44.064 00:44:36 -- common/autotest_common.sh@81 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:10:44.064 00:44:36 -- common/autotest_common.sh@83 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:10:44.064 00:44:36 -- common/autotest_common.sh@85 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:10:44.064 00:44:36 -- common/autotest_common.sh@87 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:10:44.064 00:44:36 -- common/autotest_common.sh@89 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:10:44.064 00:44:36 -- common/autotest_common.sh@91 -- # : 1 00:10:44.064 00:44:36 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:10:44.064 00:44:36 -- common/autotest_common.sh@93 -- # 
: 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:10:44.064 00:44:36 -- common/autotest_common.sh@95 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:44.064 00:44:36 -- common/autotest_common.sh@97 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:10:44.064 00:44:36 -- common/autotest_common.sh@99 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:10:44.064 00:44:36 -- common/autotest_common.sh@101 -- # : tcp 00:10:44.064 00:44:36 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:44.064 00:44:36 -- common/autotest_common.sh@103 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:10:44.064 00:44:36 -- common/autotest_common.sh@105 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:10:44.064 00:44:36 -- common/autotest_common.sh@107 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:10:44.064 00:44:36 -- common/autotest_common.sh@109 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:10:44.064 00:44:36 -- common/autotest_common.sh@111 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:10:44.064 00:44:36 -- common/autotest_common.sh@113 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:10:44.064 00:44:36 -- common/autotest_common.sh@115 -- # : 0 00:10:44.064 00:44:36 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:10:44.064 00:44:36 -- common/autotest_common.sh@117 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:44.065 00:44:36 -- common/autotest_common.sh@119 -- # : 1 00:10:44.065 00:44:36 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:10:44.065 00:44:36 -- common/autotest_common.sh@121 -- # : 1 00:10:44.065 00:44:36 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:10:44.065 00:44:36 -- common/autotest_common.sh@123 -- # : 00:10:44.065 00:44:36 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:44.065 00:44:36 -- common/autotest_common.sh@125 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:10:44.065 00:44:36 -- common/autotest_common.sh@127 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:10:44.065 00:44:36 -- common/autotest_common.sh@129 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:10:44.065 00:44:36 -- common/autotest_common.sh@131 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:10:44.065 00:44:36 -- common/autotest_common.sh@133 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:10:44.065 00:44:36 -- common/autotest_common.sh@135 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:10:44.065 00:44:36 -- common/autotest_common.sh@137 -- # : 00:10:44.065 00:44:36 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:10:44.065 00:44:36 -- common/autotest_common.sh@139 -- # : true 00:10:44.065 00:44:36 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:10:44.065 00:44:36 -- 
common/autotest_common.sh@141 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:10:44.065 00:44:36 -- common/autotest_common.sh@143 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:10:44.065 00:44:36 -- common/autotest_common.sh@145 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:10:44.065 00:44:36 -- common/autotest_common.sh@147 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:10:44.065 00:44:36 -- common/autotest_common.sh@149 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:10:44.065 00:44:36 -- common/autotest_common.sh@151 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:10:44.065 00:44:36 -- common/autotest_common.sh@153 -- # : 00:10:44.065 00:44:36 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:10:44.065 00:44:36 -- common/autotest_common.sh@155 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:10:44.065 00:44:36 -- common/autotest_common.sh@157 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:10:44.065 00:44:36 -- common/autotest_common.sh@159 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:10:44.065 00:44:36 -- common/autotest_common.sh@161 -- # : 1 00:10:44.065 00:44:36 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:10:44.065 00:44:36 -- common/autotest_common.sh@163 -- # : 1 00:10:44.065 00:44:36 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:10:44.065 00:44:36 -- common/autotest_common.sh@166 -- # : 00:10:44.065 00:44:36 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:10:44.065 00:44:36 -- common/autotest_common.sh@168 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:10:44.065 00:44:36 -- common/autotest_common.sh@170 -- # : 0 00:10:44.065 00:44:36 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:44.065 00:44:36 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:10:44.065 00:44:36 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib 00:10:44.065 00:44:36 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:10:44.065 00:44:36 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib 00:10:44.065 00:44:36 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:44.065 00:44:36 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:44.065 00:44:36 -- common/autotest_common.sh@177 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:44.065 00:44:36 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/dsa-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:44.065 00:44:36 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:44.065 00:44:36 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:44.065 00:44:36 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:10:44.065 00:44:36 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python:/var/jenkins/workspace/dsa-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/dsa-phy-autotest/spdk/python 00:10:44.065 00:44:36 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:44.065 00:44:36 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:10:44.065 00:44:36 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:44.065 00:44:36 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:44.065 00:44:36 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:44.065 00:44:36 -- common/autotest_common.sh@193 -- # 
UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:44.065 00:44:36 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:44.065 00:44:36 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:10:44.065 00:44:36 -- common/autotest_common.sh@199 -- # cat 00:10:44.065 00:44:36 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:10:44.065 00:44:36 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:44.065 00:44:36 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:44.065 00:44:36 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:44.065 00:44:36 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:44.065 00:44:36 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:10:44.065 00:44:36 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:10:44.065 00:44:36 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:10:44.065 00:44:36 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin 00:10:44.065 00:44:36 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:10:44.065 00:44:36 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples 00:10:44.065 00:44:36 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:44.065 00:44:36 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:44.065 00:44:36 -- common/autotest_common.sh@243 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:44.065 00:44:36 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:44.065 00:44:36 -- common/autotest_common.sh@245 -- # export AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:44.065 00:44:36 -- common/autotest_common.sh@245 -- # AR_TOOL=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:44.065 00:44:36 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:44.065 00:44:36 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:44.065 00:44:36 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:10:44.065 00:44:36 -- common/autotest_common.sh@252 -- # export valgrind= 00:10:44.065 00:44:36 -- common/autotest_common.sh@252 -- # valgrind= 00:10:44.065 00:44:36 -- common/autotest_common.sh@258 -- # uname -s 00:10:44.065 00:44:36 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:10:44.065 00:44:36 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:10:44.065 00:44:36 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:10:44.065 00:44:36 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:10:44.065 00:44:36 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:10:44.065 00:44:36 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:10:44.065 00:44:36 -- common/autotest_common.sh@268 -- # MAKE=make 00:10:44.065 00:44:36 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j128 00:10:44.065 00:44:36 -- common/autotest_common.sh@285 
-- # export HUGEMEM=4096 00:10:44.065 00:44:36 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:10:44.065 00:44:36 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:10:44.065 00:44:36 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:10:44.065 00:44:36 -- common/autotest_common.sh@289 -- # for i in "$@" 00:10:44.065 00:44:36 -- common/autotest_common.sh@290 -- # case "$i" in 00:10:44.065 00:44:36 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:10:44.065 00:44:36 -- common/autotest_common.sh@307 -- # [[ -z 2624621 ]] 00:10:44.065 00:44:36 -- common/autotest_common.sh@307 -- # kill -0 2624621 00:10:44.065 00:44:36 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:10:44.065 00:44:36 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:10:44.065 00:44:36 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:10:44.065 00:44:36 -- common/autotest_common.sh@320 -- # local mount target_dir 00:10:44.066 00:44:36 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:10:44.066 00:44:36 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:10:44.066 00:44:36 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:10:44.066 00:44:36 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:10:44.066 00:44:36 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.FVVA4D 00:10:44.066 00:44:36 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:44.066 00:44:36 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:10:44.066 00:44:36 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:10:44.066 00:44:36 -- common/autotest_common.sh@344 -- # mkdir -p /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target /tmp/spdk.FVVA4D/tests/target /tmp/spdk.FVVA4D 00:10:44.066 00:44:36 -- common/autotest_common.sh@347 -- # requested_size=2214592512 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@316 -- # df -T 00:10:44.066 00:44:36 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_devtmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # avails["$mount"]=67108864 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # sizes["$mount"]=67108864 00:10:44.066 00:44:36 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/pmem0 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext2 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # avails["$mount"]=1052192768 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # sizes["$mount"]=5284429824 00:10:44.066 00:44:36 -- common/autotest_common.sh@352 -- # uses["$mount"]=4232237056 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # mounts["$mount"]=spdk_root 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # fss["$mount"]=overlay 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # avails["$mount"]=259008094208 
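For reference, the read loop above is how set_test_storage turns df -T rows into the mounts/fss/sizes/avails/uses arrays. A minimal standalone sketch of the same pattern, assuming GNU df and bash 4 associative arrays; the variable names and the 1K-block-to-bytes conversion are illustrative assumptions, not the script's own code:

    # Sketch of the df -T parsing pattern traced above (illustrative only).
    declare -A fss avails
    while read -r src fstype blocks used avail _ mnt; do
        fss["$mnt"]=$fstype
        avails["$mnt"]=$((avail * 1024))   # assumption: df reports 1K blocks by default
    done < <(df -T | grep -v Filesystem)   # same header filter as in the trace
    echo "/ is ${fss[/]} with ${avails[/]} bytes available"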
00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # sizes["$mount"]=264763838464 00:10:44.066 00:44:36 -- common/autotest_common.sh@352 -- # uses["$mount"]=5755744256 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # avails["$mount"]=132379303936 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # sizes["$mount"]=132381917184 00:10:44.066 00:44:36 -- common/autotest_common.sh@352 -- # uses["$mount"]=2613248 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # avails["$mount"]=52943085568 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # sizes["$mount"]=52952768512 00:10:44.066 00:44:36 -- common/autotest_common.sh@352 -- # uses["$mount"]=9682944 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # mounts["$mount"]=efivarfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # fss["$mount"]=efivarfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # avails["$mount"]=200704 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # sizes["$mount"]=507904 00:10:44.066 00:44:36 -- common/autotest_common.sh@352 -- # uses["$mount"]=303104 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # avails["$mount"]=132381704192 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # sizes["$mount"]=132381921280 00:10:44.066 00:44:36 -- common/autotest_common.sh@352 -- # uses["$mount"]=217088 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # avails["$mount"]=26476376064 00:10:44.066 00:44:36 -- common/autotest_common.sh@351 -- # sizes["$mount"]=26476380160 00:10:44.066 00:44:36 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:10:44.066 00:44:36 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:10:44.066 00:44:36 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:10:44.066 * Looking for test storage... 
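The numbers printed above make the space check that follows easy to verify by hand: requested_size is 2147483648 + 67108864, i.e. the 2 GiB passed to set_test_storage plus a 64 MiB margin. A sketch of the arithmetic, using only values from this trace (the variable names are illustrative):

    requested=$((2147483648 + 67108864))   # = 2214592512, as logged
    sizes_root=264763838464                # total bytes on / (the spdk_root overlay)
    uses_root=5755744256                   # bytes already used on /
    new_size=$((uses_root + requested))    # = 7970336768, matching new_size in the trace
    echo $((new_size * 100 / sizes_root))  # prints 3: ~3% projected use, far below the 95% cutoff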
00:10:44.066 00:44:36 -- common/autotest_common.sh@357 -- # local target_space new_size 00:10:44.066 00:44:36 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:10:44.066 00:44:36 -- common/autotest_common.sh@361 -- # df /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:44.066 00:44:36 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:44.066 00:44:36 -- common/autotest_common.sh@361 -- # mount=/ 00:10:44.066 00:44:36 -- common/autotest_common.sh@363 -- # target_space=259008094208 00:10:44.066 00:44:36 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:10:44.066 00:44:36 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:10:44.066 00:44:36 -- common/autotest_common.sh@369 -- # [[ overlay == tmpfs ]] 00:10:44.066 00:44:36 -- common/autotest_common.sh@369 -- # [[ overlay == ramfs ]] 00:10:44.066 00:44:36 -- common/autotest_common.sh@369 -- # [[ / == / ]] 00:10:44.066 00:44:36 -- common/autotest_common.sh@370 -- # new_size=7970336768 00:10:44.066 00:44:36 -- common/autotest_common.sh@371 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:44.066 00:44:36 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:44.066 00:44:36 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:44.066 00:44:36 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:44.066 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:10:44.066 00:44:36 -- common/autotest_common.sh@378 -- # return 0 00:10:44.066 00:44:36 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:10:44.066 00:44:36 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:10:44.066 00:44:36 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:44.066 00:44:36 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:44.066 00:44:36 -- common/autotest_common.sh@1673 -- # true 00:10:44.066 00:44:36 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:10:44.066 00:44:36 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:10:44.066 00:44:36 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:10:44.066 00:44:36 -- common/autotest_common.sh@27 -- # exec 00:10:44.066 00:44:36 -- common/autotest_common.sh@29 -- # exec 00:10:44.066 00:44:36 -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:44.066 00:44:36 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:10:44.066 00:44:36 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:44.066 00:44:36 -- common/autotest_common.sh@18 -- # set -x 00:10:44.066 00:44:36 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:10:44.066 00:44:36 -- nvmf/common.sh@7 -- # uname -s 00:10:44.066 00:44:36 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:44.066 00:44:36 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:44.066 00:44:36 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:44.066 00:44:36 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:44.066 00:44:36 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:44.066 00:44:36 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:44.066 00:44:36 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:44.066 00:44:36 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:44.066 00:44:36 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:44.066 00:44:36 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:44.066 00:44:36 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:10:44.066 00:44:36 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:10:44.066 00:44:36 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:44.066 00:44:36 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:44.066 00:44:36 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:10:44.066 00:44:36 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:44.066 00:44:36 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:10:44.066 00:44:36 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:44.066 00:44:36 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:44.066 00:44:36 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:44.066 00:44:36 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.066 00:44:36 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.066 00:44:36 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.066 00:44:36 -- paths/export.sh@5 -- # export PATH 00:10:44.067 00:44:36 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:44.067 00:44:36 -- nvmf/common.sh@47 -- # : 0 00:10:44.067 00:44:36 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:44.067 00:44:36 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:44.067 00:44:36 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:44.067 00:44:36 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:44.067 00:44:36 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:44.067 00:44:36 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:44.067 00:44:36 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:44.067 00:44:36 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:44.067 00:44:36 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:44.067 00:44:36 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:44.067 00:44:36 -- target/filesystem.sh@15 -- # nvmftestinit 00:10:44.067 00:44:36 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:10:44.067 00:44:36 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:44.067 00:44:36 -- nvmf/common.sh@437 -- # prepare_net_devs 00:10:44.067 00:44:36 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:10:44.067 00:44:36 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:10:44.067 00:44:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:44.067 00:44:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:44.067 00:44:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:44.328 00:44:36 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:10:44.328 00:44:36 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:10:44.328 00:44:36 -- nvmf/common.sh@285 -- # xtrace_disable 00:10:44.328 00:44:36 -- common/autotest_common.sh@10 -- # set +x 00:10:49.607 00:44:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:10:49.607 00:44:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:10:49.607 00:44:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:49.607 00:44:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:49.607 00:44:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:49.607 00:44:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:49.607 00:44:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:49.607 00:44:41 
-- nvmf/common.sh@295 -- # net_devs=() 00:10:49.607 00:44:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:49.607 00:44:41 -- nvmf/common.sh@296 -- # e810=() 00:10:49.607 00:44:41 -- nvmf/common.sh@296 -- # local -ga e810 00:10:49.607 00:44:41 -- nvmf/common.sh@297 -- # x722=() 00:10:49.607 00:44:41 -- nvmf/common.sh@297 -- # local -ga x722 00:10:49.607 00:44:41 -- nvmf/common.sh@298 -- # mlx=() 00:10:49.607 00:44:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:10:49.607 00:44:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:49.607 00:44:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:49.607 00:44:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:49.607 00:44:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.607 00:44:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:10:49.607 Found 0000:27:00.0 (0x8086 - 0x159b) 00:10:49.607 00:44:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:49.607 00:44:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:10:49.607 Found 0000:27:00.1 (0x8086 - 0x159b) 00:10:49.607 00:44:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:49.607 00:44:41 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.607 00:44:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.607 00:44:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:49.607 00:44:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.607 00:44:41 -- nvmf/common.sh@389 -- # 
echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:10:49.607 Found net devices under 0000:27:00.0: cvl_0_0 00:10:49.607 00:44:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.607 00:44:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:49.607 00:44:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:49.607 00:44:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:10:49.607 00:44:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:49.607 00:44:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:10:49.607 Found net devices under 0000:27:00.1: cvl_0_1 00:10:49.607 00:44:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:10:49.607 00:44:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:10:49.607 00:44:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:10:49.607 00:44:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:10:49.607 00:44:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:10:49.607 00:44:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:49.607 00:44:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:49.607 00:44:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:49.607 00:44:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:49.607 00:44:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:49.607 00:44:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:49.607 00:44:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:49.607 00:44:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:49.607 00:44:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:49.607 00:44:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:49.607 00:44:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:49.607 00:44:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:49.607 00:44:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:49.607 00:44:42 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:49.607 00:44:42 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:49.607 00:44:42 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:49.607 00:44:42 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:49.607 00:44:42 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:49.607 00:44:42 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:49.607 00:44:42 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:49.607 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:49.607 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:10:49.607 00:10:49.607 --- 10.0.0.2 ping statistics --- 00:10:49.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.607 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:49.607 00:44:42 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:49.607 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:49.607 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.301 ms 00:10:49.607 00:10:49.607 --- 10.0.0.1 ping statistics --- 00:10:49.607 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:49.607 rtt min/avg/max/mdev = 0.301/0.301/0.301/0.000 ms 00:10:49.607 00:44:42 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:49.607 00:44:42 -- nvmf/common.sh@411 -- # return 0 00:10:49.607 00:44:42 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:10:49.607 00:44:42 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:49.607 00:44:42 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:10:49.607 00:44:42 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:10:49.607 00:44:42 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:49.607 00:44:42 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:10:49.607 00:44:42 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:10:49.607 00:44:42 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:49.607 00:44:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:10:49.607 00:44:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:49.607 00:44:42 -- common/autotest_common.sh@10 -- # set +x 00:10:49.607 ************************************ 00:10:49.607 START TEST nvmf_filesystem_no_in_capsule 00:10:49.607 ************************************ 00:10:49.607 00:44:42 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:10:49.607 00:44:42 -- target/filesystem.sh@47 -- # in_capsule=0 00:10:49.607 00:44:42 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:49.607 00:44:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:10:49.608 00:44:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:10:49.608 00:44:42 -- common/autotest_common.sh@10 -- # set +x 00:10:49.608 00:44:42 -- nvmf/common.sh@470 -- # nvmfpid=2628179 00:10:49.608 00:44:42 -- nvmf/common.sh@471 -- # waitforlisten 2628179 00:10:49.608 00:44:42 -- common/autotest_common.sh@817 -- # '[' -z 2628179 ']' 00:10:49.608 00:44:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:49.608 00:44:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:10:49.608 00:44:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:49.608 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:49.608 00:44:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:10:49.608 00:44:42 -- common/autotest_common.sh@10 -- # set +x 00:10:49.608 00:44:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:49.868 [2024-04-27 00:44:42.392984] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:10:49.868 [2024-04-27 00:44:42.393113] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:49.868 EAL: No free 2048 kB hugepages reported on node 1 00:10:49.868 [2024-04-27 00:44:42.534798] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:50.128 [2024-04-27 00:44:42.629994] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
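The nvmf_tcp_init sequence above wired a self-contained NVMe/TCP loopback out of the two ice ports: the target port moved into a private network namespace with 10.0.0.2, the initiator port kept 10.0.0.1 in the host namespace, and the firewall was opened on the default NVMe/TCP port before reachability was ping-tested. A condensed sketch of that wiring, where nic0 and nic1 are placeholder interface names (cvl_0_0 and cvl_0_1 here) and the final line abbreviates what nvmfappstart does:

    ip netns add target_ns                        # cvl_0_0_ns_spdk in this log
    ip link set nic0 netns target_ns              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev nic1              # initiator side, host namespace
    ip netns exec target_ns ip addr add 10.0.0.2/24 dev nic0
    ip link set nic1 up
    ip netns exec target_ns ip link set nic0 up
    ip netns exec target_ns ip link set lo up
    iptables -I INPUT 1 -i nic1 -p tcp --dport 4420 -j ACCEPT   # default NVMe/TCP port
    ping -c 1 10.0.0.2                            # host -> namespaced target
    ip netns exec target_ns ./build/bin/nvmf_tgt -m 0xF &       # target runs inside the ns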
00:10:50.128 [2024-04-27 00:44:42.630041] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:50.128 [2024-04-27 00:44:42.630053] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:50.128 [2024-04-27 00:44:42.630063] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:50.128 [2024-04-27 00:44:42.630070] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:50.128 [2024-04-27 00:44:42.630297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:10:50.128 [2024-04-27 00:44:42.630299] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:10:50.129 [2024-04-27 00:44:42.630332] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:10:50.129 [2024-04-27 00:44:42.630347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:10:50.697 00:44:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:10:50.697 00:44:43 -- common/autotest_common.sh@850 -- # return 0 00:10:50.697 00:44:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:10:50.697 00:44:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:10:50.697 00:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:50.697 00:44:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:50.697 00:44:43 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:50.697 00:44:43 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:50.697 00:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.697 00:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:50.697 [2024-04-27 00:44:43.129973] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:50.697 00:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.697 00:44:43 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:50.697 00:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.697 00:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:50.697 Malloc1 00:10:50.697 00:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.697 00:44:43 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:50.697 00:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.697 00:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:50.697 00:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.697 00:44:43 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:50.697 00:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.697 00:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:50.697 00:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.697 00:44:43 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:50.697 00:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.697 00:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:50.697 [2024-04-27 00:44:43.390617] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:50.956 00:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.956 00:44:43 -- target/filesystem.sh@58 -- # get_bdev_size 
Malloc1 00:10:50.956 00:44:43 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:10:50.956 00:44:43 -- common/autotest_common.sh@1365 -- # local bdev_info 00:10:50.956 00:44:43 -- common/autotest_common.sh@1366 -- # local bs 00:10:50.956 00:44:43 -- common/autotest_common.sh@1367 -- # local nb 00:10:50.956 00:44:43 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:50.956 00:44:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:10:50.956 00:44:43 -- common/autotest_common.sh@10 -- # set +x 00:10:50.956 00:44:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:10:50.956 00:44:43 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:10:50.956 { 00:10:50.956 "name": "Malloc1", 00:10:50.956 "aliases": [ 00:10:50.956 "78f21a3f-545d-4ac6-9f78-bde875046a5d" 00:10:50.956 ], 00:10:50.956 "product_name": "Malloc disk", 00:10:50.956 "block_size": 512, 00:10:50.956 "num_blocks": 1048576, 00:10:50.956 "uuid": "78f21a3f-545d-4ac6-9f78-bde875046a5d", 00:10:50.956 "assigned_rate_limits": { 00:10:50.956 "rw_ios_per_sec": 0, 00:10:50.956 "rw_mbytes_per_sec": 0, 00:10:50.956 "r_mbytes_per_sec": 0, 00:10:50.956 "w_mbytes_per_sec": 0 00:10:50.956 }, 00:10:50.956 "claimed": true, 00:10:50.956 "claim_type": "exclusive_write", 00:10:50.956 "zoned": false, 00:10:50.956 "supported_io_types": { 00:10:50.956 "read": true, 00:10:50.956 "write": true, 00:10:50.956 "unmap": true, 00:10:50.956 "write_zeroes": true, 00:10:50.956 "flush": true, 00:10:50.956 "reset": true, 00:10:50.956 "compare": false, 00:10:50.956 "compare_and_write": false, 00:10:50.956 "abort": true, 00:10:50.956 "nvme_admin": false, 00:10:50.956 "nvme_io": false 00:10:50.956 }, 00:10:50.956 "memory_domains": [ 00:10:50.956 { 00:10:50.956 "dma_device_id": "system", 00:10:50.956 "dma_device_type": 1 00:10:50.956 }, 00:10:50.956 { 00:10:50.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.956 "dma_device_type": 2 00:10:50.956 } 00:10:50.956 ], 00:10:50.956 "driver_specific": {} 00:10:50.956 } 00:10:50.956 ]' 00:10:50.956 00:44:43 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:10:50.956 00:44:43 -- common/autotest_common.sh@1369 -- # bs=512 00:10:50.956 00:44:43 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:10:50.956 00:44:43 -- common/autotest_common.sh@1370 -- # nb=1048576 00:10:50.956 00:44:43 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:10:50.956 00:44:43 -- common/autotest_common.sh@1374 -- # echo 512 00:10:50.956 00:44:43 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:50.956 00:44:43 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:52.338 00:44:44 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:52.338 00:44:44 -- common/autotest_common.sh@1184 -- # local i=0 00:10:52.338 00:44:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:10:52.338 00:44:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:10:52.338 00:44:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:10:54.872 00:44:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:10:54.872 00:44:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:10:54.872 00:44:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:10:54.872 00:44:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 
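The rpc_cmd calls above map directly onto scripts/rpc.py invocations against the target's default /var/tmp/spdk.sock RPC socket, and the jq probes read the bdev geometry back out of the bdev_get_bdevs JSON. A condensed sketch of the target setup plus the initiator connect, with all values copied from this trace (the combined jq expression is shorthand for the two separate block_size and num_blocks queries above):

    # Target side, equivalent to the rpc_cmd sequence in the trace:
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    rpc.py bdev_malloc_create 512 512 -b Malloc1      # 512 MiB bdev, 512 B blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Initiator side: size the bdev, then connect over NVMe/TCP.
    rpc.py bdev_get_bdevs -b Malloc1 | jq '.[0].block_size * .[0].num_blocks'   # 536870912
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420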
00:10:54.872 00:44:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:10:54.872 00:44:46 -- common/autotest_common.sh@1194 -- # return 0 00:10:54.872 00:44:46 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:54.872 00:44:46 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:54.872 00:44:46 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:54.872 00:44:46 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:54.872 00:44:46 -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:54.872 00:44:46 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:54.872 00:44:46 -- setup/common.sh@80 -- # echo 536870912 00:10:54.872 00:44:46 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:54.872 00:44:46 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:54.872 00:44:46 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:54.872 00:44:46 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:54.872 00:44:47 -- target/filesystem.sh@69 -- # partprobe 00:10:54.872 00:44:47 -- target/filesystem.sh@70 -- # sleep 1 00:10:55.804 00:44:48 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:55.804 00:44:48 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:55.804 00:44:48 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:55.804 00:44:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:55.804 00:44:48 -- common/autotest_common.sh@10 -- # set +x 00:10:55.804 ************************************ 00:10:55.804 START TEST filesystem_ext4 00:10:55.804 ************************************ 00:10:55.804 00:44:48 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:55.804 00:44:48 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:55.804 00:44:48 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:55.804 00:44:48 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:55.804 00:44:48 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:10:55.804 00:44:48 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:55.804 00:44:48 -- common/autotest_common.sh@914 -- # local i=0 00:10:55.804 00:44:48 -- common/autotest_common.sh@915 -- # local force 00:10:55.804 00:44:48 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:10:55.804 00:44:48 -- common/autotest_common.sh@918 -- # force=-F 00:10:55.804 00:44:48 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:55.804 mke2fs 1.46.5 (30-Dec-2021) 00:10:55.804 Discarding device blocks: 0/522240 done 00:10:55.804 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:55.804 Filesystem UUID: 0ef253a1-1c31-4ce1-a753-f8d3f1fc4077 00:10:55.804 Superblock backups stored on blocks: 00:10:55.804 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:55.804 00:10:55.804 Allocating group tables: 0/64 done 00:10:55.804 Writing inode tables: 0/64 done 00:10:56.064 Creating journal (8192 blocks): done 00:10:56.890 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:10:56.890 00:10:56.890 00:44:49 -- common/autotest_common.sh@931 -- # return 0 00:10:56.890 00:44:49 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:57.149 00:44:49 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:57.149 00:44:49 -- target/filesystem.sh@25 -- # sync 00:10:57.149 00:44:49 -- target/filesystem.sh@26 -- 
# rm /mnt/device/aaa 00:10:57.149 00:44:49 -- target/filesystem.sh@27 -- # sync 00:10:57.149 00:44:49 -- target/filesystem.sh@29 -- # i=0 00:10:57.149 00:44:49 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:57.149 00:44:49 -- target/filesystem.sh@37 -- # kill -0 2628179 00:10:57.149 00:44:49 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:57.149 00:44:49 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:57.149 00:44:49 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:57.149 00:44:49 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:57.409 00:10:57.409 real 0m1.525s 00:10:57.409 user 0m0.017s 00:10:57.409 sys 0m0.048s 00:10:57.409 00:44:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:57.409 00:44:49 -- common/autotest_common.sh@10 -- # set +x 00:10:57.409 ************************************ 00:10:57.409 END TEST filesystem_ext4 00:10:57.409 ************************************ 00:10:57.409 00:44:49 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:57.409 00:44:49 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:57.409 00:44:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:57.409 00:44:49 -- common/autotest_common.sh@10 -- # set +x 00:10:57.409 ************************************ 00:10:57.409 START TEST filesystem_btrfs 00:10:57.409 ************************************ 00:10:57.409 00:44:49 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:57.409 00:44:49 -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:57.409 00:44:49 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:57.409 00:44:49 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:57.409 00:44:49 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:10:57.409 00:44:49 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:57.409 00:44:49 -- common/autotest_common.sh@914 -- # local i=0 00:10:57.409 00:44:49 -- common/autotest_common.sh@915 -- # local force 00:10:57.409 00:44:49 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:10:57.409 00:44:49 -- common/autotest_common.sh@920 -- # force=-f 00:10:57.409 00:44:49 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:57.672 btrfs-progs v6.6.2 00:10:57.672 See https://btrfs.readthedocs.io for more information. 00:10:57.672 00:10:57.672 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:10:57.672 NOTE: several default settings have changed in version 5.15, please make sure 00:10:57.672 this does not affect your deployments: 00:10:57.672 - DUP for metadata (-m dup) 00:10:57.672 - enabled no-holes (-O no-holes) 00:10:57.672 - enabled free-space-tree (-R free-space-tree) 00:10:57.672 00:10:57.672 Label: (null) 00:10:57.672 UUID: f36c7d32-97df-4e7f-81af-2c546a81bc05 00:10:57.672 Node size: 16384 00:10:57.672 Sector size: 4096 00:10:57.672 Filesystem size: 510.00MiB 00:10:57.672 Block group profiles: 00:10:57.672 Data: single 8.00MiB 00:10:57.672 Metadata: DUP 32.00MiB 00:10:57.672 System: DUP 8.00MiB 00:10:57.672 SSD detected: yes 00:10:57.672 Zoned device: no 00:10:57.672 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:57.672 Runtime features: free-space-tree 00:10:57.672 Checksum: crc32c 00:10:57.672 Number of devices: 1 00:10:57.672 Devices: 00:10:57.672 ID SIZE PATH 00:10:57.672 1 510.00MiB /dev/nvme0n1p1 00:10:57.672 00:10:57.672 00:44:50 -- common/autotest_common.sh@931 -- # return 0 00:10:57.672 00:44:50 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:58.297 00:44:50 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:58.297 00:44:50 -- target/filesystem.sh@25 -- # sync 00:10:58.297 00:44:50 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:58.297 00:44:50 -- target/filesystem.sh@27 -- # sync 00:10:58.297 00:44:50 -- target/filesystem.sh@29 -- # i=0 00:10:58.297 00:44:50 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:58.297 00:44:50 -- target/filesystem.sh@37 -- # kill -0 2628179 00:10:58.297 00:44:50 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:58.297 00:44:50 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:58.297 00:44:50 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:58.297 00:44:50 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:58.297 00:10:58.297 real 0m0.935s 00:10:58.297 user 0m0.015s 00:10:58.297 sys 0m0.055s 00:10:58.297 00:44:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:10:58.297 00:44:50 -- common/autotest_common.sh@10 -- # set +x 00:10:58.297 ************************************ 00:10:58.297 END TEST filesystem_btrfs 00:10:58.297 ************************************ 00:10:58.297 00:44:50 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:58.297 00:44:50 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:10:58.297 00:44:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:10:58.297 00:44:50 -- common/autotest_common.sh@10 -- # set +x 00:10:58.556 ************************************ 00:10:58.556 START TEST filesystem_xfs 00:10:58.556 ************************************ 00:10:58.556 00:44:51 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:10:58.556 00:44:51 -- target/filesystem.sh@18 -- # fstype=xfs 00:10:58.556 00:44:51 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:58.556 00:44:51 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:58.556 00:44:51 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:10:58.556 00:44:51 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:10:58.556 00:44:51 -- common/autotest_common.sh@914 -- # local i=0 00:10:58.556 00:44:51 -- common/autotest_common.sh@915 -- # local force 00:10:58.556 00:44:51 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:10:58.556 00:44:51 -- common/autotest_common.sh@920 -- # force=-f 00:10:58.556 00:44:51 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:58.556 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:58.556 = sectsz=512 attr=2, projid32bit=1 00:10:58.556 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:58.556 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:58.556 data = bsize=4096 blocks=130560, imaxpct=25 00:10:58.556 = sunit=0 swidth=0 blks 00:10:58.556 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:58.556 log =internal log bsize=4096 blocks=16384, version=2 00:10:58.556 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:58.556 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:59.933 Discarding blocks...Done. 00:10:59.933 00:44:52 -- common/autotest_common.sh@931 -- # return 0 00:10:59.933 00:44:52 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:01.834 00:44:54 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:01.834 00:44:54 -- target/filesystem.sh@25 -- # sync 00:11:01.834 00:44:54 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:01.834 00:44:54 -- target/filesystem.sh@27 -- # sync 00:11:01.834 00:44:54 -- target/filesystem.sh@29 -- # i=0 00:11:01.834 00:44:54 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:01.834 00:44:54 -- target/filesystem.sh@37 -- # kill -0 2628179 00:11:01.834 00:44:54 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:01.834 00:44:54 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:02.093 00:44:54 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:02.093 00:44:54 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:02.093 00:11:02.093 real 0m3.502s 00:11:02.093 user 0m0.017s 00:11:02.093 sys 0m0.051s 00:11:02.093 00:44:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:02.093 00:44:54 -- common/autotest_common.sh@10 -- # set +x 00:11:02.093 ************************************ 00:11:02.093 END TEST filesystem_xfs 00:11:02.093 ************************************ 00:11:02.093 00:44:54 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:02.093 00:44:54 -- target/filesystem.sh@93 -- # sync 00:11:02.093 00:44:54 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:02.093 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:02.093 00:44:54 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:02.093 00:44:54 -- common/autotest_common.sh@1205 -- # local i=0 00:11:02.093 00:44:54 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:02.093 00:44:54 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.093 00:44:54 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:02.093 00:44:54 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:02.093 00:44:54 -- common/autotest_common.sh@1217 -- # return 0 00:11:02.093 00:44:54 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:02.093 00:44:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:02.093 00:44:54 -- common/autotest_common.sh@10 -- # set +x 00:11:02.093 00:44:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:02.094 00:44:54 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:02.094 00:44:54 -- target/filesystem.sh@101 -- # killprocess 2628179 00:11:02.094 00:44:54 -- common/autotest_common.sh@936 -- # '[' -z 2628179 ']' 00:11:02.094 00:44:54 -- common/autotest_common.sh@940 -- # kill -0 2628179 00:11:02.094 00:44:54 -- 
common/autotest_common.sh@941 -- # uname 00:11:02.094 00:44:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:02.094 00:44:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2628179 00:11:02.353 00:44:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:02.353 00:44:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:02.353 00:44:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2628179' 00:11:02.353 killing process with pid 2628179 00:11:02.353 00:44:54 -- common/autotest_common.sh@955 -- # kill 2628179 00:11:02.353 00:44:54 -- common/autotest_common.sh@960 -- # wait 2628179 00:11:03.473 00:44:55 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:03.473 00:11:03.473 real 0m13.449s 00:11:03.473 user 0m51.851s 00:11:03.473 sys 0m1.181s 00:11:03.473 00:44:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:03.473 00:44:55 -- common/autotest_common.sh@10 -- # set +x 00:11:03.473 ************************************ 00:11:03.473 END TEST nvmf_filesystem_no_in_capsule 00:11:03.473 ************************************ 00:11:03.473 00:44:55 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:03.473 00:44:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:03.473 00:44:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:03.473 00:44:55 -- common/autotest_common.sh@10 -- # set +x 00:11:03.473 ************************************ 00:11:03.473 START TEST nvmf_filesystem_in_capsule 00:11:03.473 ************************************ 00:11:03.473 00:44:55 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:11:03.473 00:44:55 -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:03.473 00:44:55 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:03.473 00:44:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:03.473 00:44:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:03.473 00:44:55 -- common/autotest_common.sh@10 -- # set +x 00:11:03.473 00:44:55 -- nvmf/common.sh@470 -- # nvmfpid=2631104 00:11:03.473 00:44:55 -- nvmf/common.sh@471 -- # waitforlisten 2631104 00:11:03.473 00:44:55 -- common/autotest_common.sh@817 -- # '[' -z 2631104 ']' 00:11:03.473 00:44:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:03.473 00:44:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:03.473 00:44:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:03.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:03.473 00:44:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:03.473 00:44:55 -- common/autotest_common.sh@10 -- # set +x 00:11:03.473 00:44:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:03.473 [2024-04-27 00:44:55.951653] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
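Note: the in-capsule pass that starts here re-runs the same ext4/btrfs/xfs matrix; the distinguishing knob is in_capsule=4096, which makes the harness create the TCP transport with a 4096-byte in-capsule data size. Condensed from the xtrace below, the target bring-up amounts to this RPC sequence (rpc_cmd in the trace wraps SPDK's scripts/rpc.py; the hostnqn/hostid flags from the trace are omitted here; an illustrative sketch, not the test script itself):

    # sketch: target bring-up for the in_capsule=4096 run
    rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # -c 4096 = in-capsule data size
    rpc.py bdev_malloc_create 512 512 -b Malloc1             # 512 MiB RAM bdev, 512-byte blocks
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420   # host side

The bdev size is then read back via bdev_get_bdevs (block_size times num_blocks, extracted with jq) and compared against the size the host reports for nvme0n1 before the test partition is created.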
00:11:03.473 [2024-04-27 00:44:55.951749] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:03.473 EAL: No free 2048 kB hugepages reported on node 1 00:11:03.473 [2024-04-27 00:44:56.073321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:03.473 [2024-04-27 00:44:56.167058] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:03.473 [2024-04-27 00:44:56.167096] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:03.473 [2024-04-27 00:44:56.167107] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:03.473 [2024-04-27 00:44:56.167116] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:03.473 [2024-04-27 00:44:56.167122] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:03.473 [2024-04-27 00:44:56.167261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:03.473 [2024-04-27 00:44:56.167354] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.473 [2024-04-27 00:44:56.167323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:03.473 [2024-04-27 00:44:56.167365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:04.040 00:44:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:04.040 00:44:56 -- common/autotest_common.sh@850 -- # return 0 00:11:04.040 00:44:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:04.040 00:44:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:04.040 00:44:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.040 00:44:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:04.040 00:44:56 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:04.040 00:44:56 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:04.040 00:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.040 00:44:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.040 [2024-04-27 00:44:56.703027] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:04.040 00:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.040 00:44:56 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:04.040 00:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.040 00:44:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.298 Malloc1 00:11:04.298 00:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.298 00:44:56 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:04.298 00:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.298 00:44:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.298 00:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.298 00:44:56 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:04.298 00:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.298 00:44:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.298 00:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.298 00:44:56 
-- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:04.298 00:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.298 00:44:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.298 [2024-04-27 00:44:56.962141] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:04.298 00:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.298 00:44:56 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:04.298 00:44:56 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:11:04.298 00:44:56 -- common/autotest_common.sh@1365 -- # local bdev_info 00:11:04.298 00:44:56 -- common/autotest_common.sh@1366 -- # local bs 00:11:04.298 00:44:56 -- common/autotest_common.sh@1367 -- # local nb 00:11:04.298 00:44:56 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:04.298 00:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:04.298 00:44:56 -- common/autotest_common.sh@10 -- # set +x 00:11:04.298 00:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:04.298 00:44:56 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:11:04.298 { 00:11:04.298 "name": "Malloc1", 00:11:04.298 "aliases": [ 00:11:04.298 "2cfb065a-3846-4bea-a48f-868e80b4eef1" 00:11:04.298 ], 00:11:04.298 "product_name": "Malloc disk", 00:11:04.298 "block_size": 512, 00:11:04.298 "num_blocks": 1048576, 00:11:04.298 "uuid": "2cfb065a-3846-4bea-a48f-868e80b4eef1", 00:11:04.298 "assigned_rate_limits": { 00:11:04.298 "rw_ios_per_sec": 0, 00:11:04.298 "rw_mbytes_per_sec": 0, 00:11:04.298 "r_mbytes_per_sec": 0, 00:11:04.298 "w_mbytes_per_sec": 0 00:11:04.298 }, 00:11:04.298 "claimed": true, 00:11:04.298 "claim_type": "exclusive_write", 00:11:04.298 "zoned": false, 00:11:04.298 "supported_io_types": { 00:11:04.298 "read": true, 00:11:04.298 "write": true, 00:11:04.298 "unmap": true, 00:11:04.298 "write_zeroes": true, 00:11:04.298 "flush": true, 00:11:04.298 "reset": true, 00:11:04.298 "compare": false, 00:11:04.298 "compare_and_write": false, 00:11:04.298 "abort": true, 00:11:04.298 "nvme_admin": false, 00:11:04.298 "nvme_io": false 00:11:04.298 }, 00:11:04.298 "memory_domains": [ 00:11:04.298 { 00:11:04.298 "dma_device_id": "system", 00:11:04.298 "dma_device_type": 1 00:11:04.298 }, 00:11:04.298 { 00:11:04.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.298 "dma_device_type": 2 00:11:04.298 } 00:11:04.298 ], 00:11:04.298 "driver_specific": {} 00:11:04.298 } 00:11:04.298 ]' 00:11:04.298 00:44:56 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:11:04.555 00:44:57 -- common/autotest_common.sh@1369 -- # bs=512 00:11:04.555 00:44:57 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:11:04.555 00:44:57 -- common/autotest_common.sh@1370 -- # nb=1048576 00:11:04.555 00:44:57 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:11:04.555 00:44:57 -- common/autotest_common.sh@1374 -- # echo 512 00:11:04.555 00:44:57 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:04.555 00:44:57 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:05.933 00:44:58 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:05.933 00:44:58 -- common/autotest_common.sh@1184 -- # local i=0 00:11:05.933 00:44:58 -- 
common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:11:05.933 00:44:58 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:11:05.933 00:44:58 -- common/autotest_common.sh@1191 -- # sleep 2 00:11:07.842 00:45:00 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:11:07.842 00:45:00 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:11:07.842 00:45:00 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:11:07.842 00:45:00 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:11:07.842 00:45:00 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:11:07.842 00:45:00 -- common/autotest_common.sh@1194 -- # return 0 00:11:07.842 00:45:00 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:07.842 00:45:00 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:08.102 00:45:00 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:08.102 00:45:00 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:08.102 00:45:00 -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:08.102 00:45:00 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:08.102 00:45:00 -- setup/common.sh@80 -- # echo 536870912 00:11:08.102 00:45:00 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:08.102 00:45:00 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:08.102 00:45:00 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:08.102 00:45:00 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:08.362 00:45:00 -- target/filesystem.sh@69 -- # partprobe 00:11:08.621 00:45:01 -- target/filesystem.sh@70 -- # sleep 1 00:11:09.556 00:45:02 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:09.556 00:45:02 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:09.556 00:45:02 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:09.556 00:45:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:09.556 00:45:02 -- common/autotest_common.sh@10 -- # set +x 00:11:09.817 ************************************ 00:11:09.817 START TEST filesystem_in_capsule_ext4 00:11:09.817 ************************************ 00:11:09.817 00:45:02 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:09.817 00:45:02 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:09.817 00:45:02 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:09.817 00:45:02 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:09.817 00:45:02 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:11:09.817 00:45:02 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:09.817 00:45:02 -- common/autotest_common.sh@914 -- # local i=0 00:11:09.817 00:45:02 -- common/autotest_common.sh@915 -- # local force 00:11:09.817 00:45:02 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:11:09.817 00:45:02 -- common/autotest_common.sh@918 -- # force=-F 00:11:09.817 00:45:02 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:09.817 mke2fs 1.46.5 (30-Dec-2021) 00:11:09.817 Discarding device blocks: 0/522240 done 00:11:09.817 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:09.817 Filesystem UUID: ccdae625-ae47-4236-957e-fe8e349917c8 00:11:09.817 Superblock backups stored on blocks: 00:11:09.817 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:09.817 00:11:09.817 
Allocating group tables: 0/64 done 00:11:09.817 Writing inode tables: 0/64 done 00:11:13.117 Creating journal (8192 blocks): done 00:11:13.117 Writing superblocks and filesystem accounting information: 0/64 done 00:11:13.117 00:11:13.117 00:45:05 -- common/autotest_common.sh@931 -- # return 0 00:11:13.117 00:45:05 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:13.377 00:45:05 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:13.377 00:45:05 -- target/filesystem.sh@25 -- # sync 00:11:13.377 00:45:05 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:13.377 00:45:05 -- target/filesystem.sh@27 -- # sync 00:11:13.377 00:45:05 -- target/filesystem.sh@29 -- # i=0 00:11:13.377 00:45:05 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:13.377 00:45:05 -- target/filesystem.sh@37 -- # kill -0 2631104 00:11:13.377 00:45:05 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:13.377 00:45:05 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:13.377 00:45:05 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:13.377 00:45:05 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:13.377 00:11:13.377 real 0m3.546s 00:11:13.377 user 0m0.017s 00:11:13.377 sys 0m0.050s 00:11:13.377 00:45:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:13.377 00:45:05 -- common/autotest_common.sh@10 -- # set +x 00:11:13.377 ************************************ 00:11:13.377 END TEST filesystem_in_capsule_ext4 00:11:13.377 ************************************ 00:11:13.377 00:45:05 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:13.377 00:45:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:13.377 00:45:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:13.377 00:45:05 -- common/autotest_common.sh@10 -- # set +x 00:11:13.377 ************************************ 00:11:13.377 START TEST filesystem_in_capsule_btrfs 00:11:13.377 ************************************ 00:11:13.377 00:45:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:13.377 00:45:06 -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:13.377 00:45:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:13.377 00:45:06 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:13.377 00:45:06 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:11:13.377 00:45:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:13.377 00:45:06 -- common/autotest_common.sh@914 -- # local i=0 00:11:13.377 00:45:06 -- common/autotest_common.sh@915 -- # local force 00:11:13.377 00:45:06 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:11:13.377 00:45:06 -- common/autotest_common.sh@920 -- # force=-f 00:11:13.377 00:45:06 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:13.945 btrfs-progs v6.6.2 00:11:13.945 See https://btrfs.readthedocs.io for more information. 00:11:13.945 00:11:13.945 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:11:13.945 NOTE: several default settings have changed in version 5.15, please make sure 00:11:13.945 this does not affect your deployments: 00:11:13.945 - DUP for metadata (-m dup) 00:11:13.945 - enabled no-holes (-O no-holes) 00:11:13.945 - enabled free-space-tree (-R free-space-tree) 00:11:13.945 00:11:13.945 Label: (null) 00:11:13.945 UUID: df51fe61-5c65-41a6-9fe9-179d345ac8f5 00:11:13.945 Node size: 16384 00:11:13.946 Sector size: 4096 00:11:13.946 Filesystem size: 510.00MiB 00:11:13.946 Block group profiles: 00:11:13.946 Data: single 8.00MiB 00:11:13.946 Metadata: DUP 32.00MiB 00:11:13.946 System: DUP 8.00MiB 00:11:13.946 SSD detected: yes 00:11:13.946 Zoned device: no 00:11:13.946 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:13.946 Runtime features: free-space-tree 00:11:13.946 Checksum: crc32c 00:11:13.946 Number of devices: 1 00:11:13.946 Devices: 00:11:13.946 ID SIZE PATH 00:11:13.946 1 510.00MiB /dev/nvme0n1p1 00:11:13.946 00:11:13.946 00:45:06 -- common/autotest_common.sh@931 -- # return 0 00:11:13.946 00:45:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:14.884 00:45:07 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:14.884 00:45:07 -- target/filesystem.sh@25 -- # sync 00:11:14.884 00:45:07 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:14.884 00:45:07 -- target/filesystem.sh@27 -- # sync 00:11:14.884 00:45:07 -- target/filesystem.sh@29 -- # i=0 00:11:14.884 00:45:07 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:14.884 00:45:07 -- target/filesystem.sh@37 -- # kill -0 2631104 00:11:14.884 00:45:07 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:14.884 00:45:07 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:14.884 00:45:07 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:14.884 00:45:07 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:14.884 00:11:14.884 real 0m1.211s 00:11:14.884 user 0m0.017s 00:11:14.884 sys 0m0.062s 00:11:14.884 00:45:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:14.884 00:45:07 -- common/autotest_common.sh@10 -- # set +x 00:11:14.884 ************************************ 00:11:14.884 END TEST filesystem_in_capsule_btrfs 00:11:14.884 ************************************ 00:11:14.884 00:45:07 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:14.884 00:45:07 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:11:14.884 00:45:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:14.884 00:45:07 -- common/autotest_common.sh@10 -- # set +x 00:11:14.884 ************************************ 00:11:14.884 START TEST filesystem_in_capsule_xfs 00:11:14.884 ************************************ 00:11:14.884 00:45:07 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:11:14.884 00:45:07 -- target/filesystem.sh@18 -- # fstype=xfs 00:11:14.884 00:45:07 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:14.884 00:45:07 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:14.884 00:45:07 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:11:14.884 00:45:07 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:11:14.884 00:45:07 -- common/autotest_common.sh@914 -- # local i=0 00:11:14.884 00:45:07 -- common/autotest_common.sh@915 -- # local force 00:11:14.884 00:45:07 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:11:14.884 00:45:07 -- common/autotest_common.sh@920 -- # force=-f 
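Note: the make_filesystem xtrace bracketing each mkfs call in these tests reduces to a force-flag dispatch plus a retry counter: mkfs.ext4 spells "force" as -F, while mkfs.btrfs and mkfs.xfs take -f. A minimal sketch consistent with the trace follows; the retry bound and back-off are illustrative assumptions, the real helper lives in common/autotest_common.sh:

    # sketch of make_filesystem, reconstructed from the xtrace
    make_filesystem() {
        local fstype=$1
        local dev_name=$2
        local i=0 force
        if [ "$fstype" = ext4 ]; then
            force=-F                            # ext4 uses the uppercase force flag
        else
            force=-f                            # btrfs/xfs use lowercase
        fi
        while ! mkfs.$fstype $force "$dev_name"; do
            (( ++i >= 3 )) && return 1          # retry bound is an assumption
            sleep 1                             # as is the back-off
        done
        return 0
    }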
00:11:14.884 00:45:07 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:14.884 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:14.884 = sectsz=512 attr=2, projid32bit=1 00:11:14.884 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:14.884 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:14.884 data = bsize=4096 blocks=130560, imaxpct=25 00:11:14.884 = sunit=0 swidth=0 blks 00:11:14.884 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:14.884 log =internal log bsize=4096 blocks=16384, version=2 00:11:14.884 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:14.884 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:15.817 Discarding blocks...Done. 00:11:15.817 00:45:08 -- common/autotest_common.sh@931 -- # return 0 00:11:15.817 00:45:08 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:18.352 00:45:10 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:18.352 00:45:10 -- target/filesystem.sh@25 -- # sync 00:11:18.352 00:45:10 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:18.352 00:45:10 -- target/filesystem.sh@27 -- # sync 00:11:18.352 00:45:10 -- target/filesystem.sh@29 -- # i=0 00:11:18.352 00:45:10 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:18.352 00:45:10 -- target/filesystem.sh@37 -- # kill -0 2631104 00:11:18.352 00:45:10 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:18.352 00:45:10 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:18.352 00:45:10 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:18.353 00:45:10 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:18.353 00:11:18.353 real 0m3.280s 00:11:18.353 user 0m0.014s 00:11:18.353 sys 0m0.051s 00:11:18.353 00:45:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:18.353 00:45:10 -- common/autotest_common.sh@10 -- # set +x 00:11:18.353 ************************************ 00:11:18.353 END TEST filesystem_in_capsule_xfs 00:11:18.353 ************************************ 00:11:18.353 00:45:10 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:18.353 00:45:10 -- target/filesystem.sh@93 -- # sync 00:11:18.353 00:45:10 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:18.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.612 00:45:11 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:18.612 00:45:11 -- common/autotest_common.sh@1205 -- # local i=0 00:11:18.612 00:45:11 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:11:18.612 00:45:11 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.612 00:45:11 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:18.612 00:45:11 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:11:18.612 00:45:11 -- common/autotest_common.sh@1217 -- # return 0 00:11:18.612 00:45:11 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:18.612 00:45:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:18.612 00:45:11 -- common/autotest_common.sh@10 -- # set +x 00:11:18.612 00:45:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:18.612 00:45:11 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:18.612 00:45:11 -- target/filesystem.sh@101 -- # killprocess 2631104 00:11:18.612 00:45:11 -- common/autotest_common.sh@936 -- # '[' -z 2631104 ']' 00:11:18.612 00:45:11 -- common/autotest_common.sh@940 -- # kill -0 2631104 
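Note: the teardown running through this stretch mirrors the no_in_capsule pass. In command form (device, NQN, and pid are the ones from this run; a sketch of the order, not the script itself):

    # sketch: teardown order used by target/filesystem.sh
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1   # drop the test partition under a device lock
    sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # host releases the controller
    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    kill 2631104 && wait 2631104                     # killprocess: signal nvmf_tgt, then wait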
00:11:18.612 00:45:11 -- common/autotest_common.sh@941 -- # uname 00:11:18.612 00:45:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:18.612 00:45:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2631104 00:11:18.612 00:45:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:18.612 00:45:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:18.612 00:45:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2631104' 00:11:18.612 killing process with pid 2631104 00:11:18.612 00:45:11 -- common/autotest_common.sh@955 -- # kill 2631104 00:11:18.612 00:45:11 -- common/autotest_common.sh@960 -- # wait 2631104 00:11:19.550 00:45:12 -- target/filesystem.sh@102 -- # nvmfpid= 00:11:19.550 00:11:19.550 real 0m16.291s 00:11:19.550 user 1m3.289s 00:11:19.550 sys 0m1.224s 00:11:19.550 00:45:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:19.550 00:45:12 -- common/autotest_common.sh@10 -- # set +x 00:11:19.550 ************************************ 00:11:19.550 END TEST nvmf_filesystem_in_capsule 00:11:19.550 ************************************ 00:11:19.550 00:45:12 -- target/filesystem.sh@108 -- # nvmftestfini 00:11:19.550 00:45:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:19.550 00:45:12 -- nvmf/common.sh@117 -- # sync 00:11:19.550 00:45:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:19.550 00:45:12 -- nvmf/common.sh@120 -- # set +e 00:11:19.550 00:45:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:19.551 00:45:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:19.551 rmmod nvme_tcp 00:11:19.551 rmmod nvme_fabrics 00:11:19.551 rmmod nvme_keyring 00:11:19.551 00:45:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:19.551 00:45:12 -- nvmf/common.sh@124 -- # set -e 00:11:19.551 00:45:12 -- nvmf/common.sh@125 -- # return 0 00:11:19.551 00:45:12 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:11:19.551 00:45:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:19.551 00:45:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:19.551 00:45:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:19.551 00:45:12 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:19.551 00:45:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:19.551 00:45:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.859 00:45:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:19.859 00:45:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:21.764 00:45:14 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:21.764 00:11:21.764 real 0m37.750s 00:11:21.764 user 1m56.830s 00:11:21.764 sys 0m6.635s 00:11:21.764 00:45:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:21.764 00:45:14 -- common/autotest_common.sh@10 -- # set +x 00:11:21.764 ************************************ 00:11:21.764 END TEST nvmf_filesystem 00:11:21.764 ************************************ 00:11:21.764 00:45:14 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:21.764 00:45:14 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:21.764 00:45:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:21.764 00:45:14 -- common/autotest_common.sh@10 -- # set +x 00:11:21.764 ************************************ 00:11:21.764 START TEST nvmf_discovery 00:11:21.764 ************************************ 00:11:21.764 00:45:14 -- 
common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:22.023 * Looking for test storage... 00:11:22.023 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:22.023 00:45:14 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.023 00:45:14 -- nvmf/common.sh@7 -- # uname -s 00:11:22.023 00:45:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.023 00:45:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.023 00:45:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.023 00:45:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.023 00:45:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.023 00:45:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.023 00:45:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.023 00:45:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.023 00:45:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.023 00:45:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.023 00:45:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:11:22.023 00:45:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:11:22.023 00:45:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.023 00:45:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.023 00:45:14 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:22.023 00:45:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.023 00:45:14 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:22.023 00:45:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.024 00:45:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.024 00:45:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.024 00:45:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.024 00:45:14 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.024 00:45:14 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.024 00:45:14 -- paths/export.sh@5 -- # export PATH 00:11:22.024 00:45:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.024 00:45:14 -- nvmf/common.sh@47 -- # : 0 00:11:22.024 00:45:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:22.024 00:45:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:22.024 00:45:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.024 00:45:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.024 00:45:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.024 00:45:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:22.024 00:45:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:22.024 00:45:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:22.024 00:45:14 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:22.024 00:45:14 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:22.024 00:45:14 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:22.024 00:45:14 -- target/discovery.sh@15 -- # hash nvme 00:11:22.024 00:45:14 -- target/discovery.sh@20 -- # nvmftestinit 00:11:22.024 00:45:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:22.024 00:45:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.024 00:45:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:22.024 00:45:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:22.024 00:45:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:22.024 00:45:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.024 00:45:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:22.024 00:45:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.024 00:45:14 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:11:22.024 00:45:14 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:22.024 00:45:14 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:22.024 00:45:14 -- common/autotest_common.sh@10 -- # set +x 00:11:27.304 00:45:19 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:27.304 00:45:19 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:27.304 00:45:19 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:27.304 00:45:19 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:27.304 00:45:19 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:27.304 00:45:19 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:27.304 00:45:19 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:27.304 
00:45:19 -- nvmf/common.sh@295 -- # net_devs=() 00:11:27.304 00:45:19 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:27.304 00:45:19 -- nvmf/common.sh@296 -- # e810=() 00:11:27.304 00:45:19 -- nvmf/common.sh@296 -- # local -ga e810 00:11:27.304 00:45:19 -- nvmf/common.sh@297 -- # x722=() 00:11:27.304 00:45:19 -- nvmf/common.sh@297 -- # local -ga x722 00:11:27.304 00:45:19 -- nvmf/common.sh@298 -- # mlx=() 00:11:27.304 00:45:19 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:27.304 00:45:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:27.304 00:45:19 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:27.304 00:45:19 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:27.304 00:45:19 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:27.304 00:45:19 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:27.305 00:45:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.305 00:45:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:27.305 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:27.305 00:45:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:27.305 00:45:19 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:27.305 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:27.305 00:45:19 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:27.305 00:45:19 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.305 00:45:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.305 00:45:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:27.305 00:45:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.305 00:45:19 -- 
nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:27.305 Found net devices under 0000:27:00.0: cvl_0_0 00:11:27.305 00:45:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.305 00:45:19 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:27.305 00:45:19 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:27.305 00:45:19 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:27.305 00:45:19 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:27.305 00:45:19 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:27.305 Found net devices under 0000:27:00.1: cvl_0_1 00:11:27.305 00:45:19 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:27.305 00:45:19 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:27.305 00:45:19 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:27.305 00:45:19 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:27.305 00:45:19 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:27.305 00:45:19 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:27.305 00:45:19 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:27.305 00:45:19 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:27.305 00:45:19 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:27.305 00:45:19 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:27.305 00:45:19 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:27.305 00:45:19 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:27.305 00:45:19 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:27.305 00:45:19 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:27.305 00:45:19 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:27.305 00:45:19 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:27.305 00:45:19 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:27.305 00:45:19 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:27.305 00:45:19 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:27.305 00:45:19 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:27.305 00:45:19 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:27.305 00:45:19 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:27.305 00:45:19 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:27.305 00:45:19 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:27.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:27.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.476 ms 00:11:27.305 00:11:27.305 --- 10.0.0.2 ping statistics --- 00:11:27.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.305 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:11:27.305 00:45:19 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:27.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:27.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:11:27.305 00:11:27.305 --- 10.0.0.1 ping statistics --- 00:11:27.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:27.305 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:11:27.305 00:45:19 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:27.305 00:45:19 -- nvmf/common.sh@411 -- # return 0 00:11:27.305 00:45:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:27.305 00:45:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:27.305 00:45:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:27.305 00:45:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:27.305 00:45:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:27.305 00:45:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:27.305 00:45:19 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:11:27.305 00:45:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:27.305 00:45:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:27.305 00:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:27.305 00:45:19 -- nvmf/common.sh@470 -- # nvmfpid=2638923 00:11:27.305 00:45:19 -- nvmf/common.sh@471 -- # waitforlisten 2638923 00:11:27.305 00:45:19 -- common/autotest_common.sh@817 -- # '[' -z 2638923 ']' 00:11:27.305 00:45:19 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:27.305 00:45:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:27.305 00:45:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:27.305 00:45:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:27.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:27.305 00:45:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:27.305 00:45:19 -- common/autotest_common.sh@10 -- # set +x 00:11:27.566 [2024-04-27 00:45:20.047514] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:11:27.566 [2024-04-27 00:45:20.047618] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:27.566 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.566 [2024-04-27 00:45:20.176146] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:27.827 [2024-04-27 00:45:20.271290] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:27.827 [2024-04-27 00:45:20.271331] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:27.827 [2024-04-27 00:45:20.271343] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:27.827 [2024-04-27 00:45:20.271353] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:27.827 [2024-04-27 00:45:20.271360] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
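Note: the nvmf_tcp_init trace just above is the entire network rig for these TCP tests: the first ice port (cvl_0_0) is moved into a private namespace to act as the target at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, and the two pings prove the path in both directions before any NVMe/TCP traffic flows. Condensed (interface names are specific to this machine, address-flush steps omitted):

    # sketch: namespace plumbing from nvmf/common.sh (nvmf_tcp_init)
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

This is also why every nvmf_tgt invocation in this log is prefixed with ip netns exec cvl_0_0_ns_spdk.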
00:11:27.827 [2024-04-27 00:45:20.271439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:27.827 [2024-04-27 00:45:20.271447] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:27.827 [2024-04-27 00:45:20.271507] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:27.827 [2024-04-27 00:45:20.271491] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:28.088 00:45:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:28.088 00:45:20 -- common/autotest_common.sh@850 -- # return 0 00:11:28.088 00:45:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:28.088 00:45:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:28.088 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.349 00:45:20 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 [2024-04-27 00:45:20.814330] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@26 -- # seq 1 4 00:11:28.349 00:45:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.349 00:45:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 Null1 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 [2024-04-27 00:45:20.862550] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.349 00:45:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 Null2 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:28.349 00:45:20 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.349 00:45:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 Null3 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:11:28.349 00:45:20 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 Null4 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:28.349 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:28.349 
00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.349 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.349 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.349 00:45:20 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:28.350 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.350 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.350 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.350 00:45:20 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.350 00:45:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.350 00:45:20 -- common/autotest_common.sh@10 -- # set +x 00:11:28.350 00:45:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.350 00:45:20 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 4420 00:11:28.611 00:11:28.611 Discovery Log Number of Records 6, Generation counter 6 00:11:28.611 =====Discovery Log Entry 0====== 00:11:28.611 trtype: tcp 00:11:28.611 adrfam: ipv4 00:11:28.611 subtype: current discovery subsystem 00:11:28.611 treq: not required 00:11:28.611 portid: 0 00:11:28.611 trsvcid: 4420 00:11:28.611 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.611 traddr: 10.0.0.2 00:11:28.611 eflags: explicit discovery connections, duplicate discovery information 00:11:28.611 sectype: none 00:11:28.611 =====Discovery Log Entry 1====== 00:11:28.611 trtype: tcp 00:11:28.611 adrfam: ipv4 00:11:28.611 subtype: nvme subsystem 00:11:28.611 treq: not required 00:11:28.611 portid: 0 00:11:28.611 trsvcid: 4420 00:11:28.611 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:28.611 traddr: 10.0.0.2 00:11:28.611 eflags: none 00:11:28.611 sectype: none 00:11:28.611 =====Discovery Log Entry 2====== 00:11:28.611 trtype: tcp 00:11:28.611 adrfam: ipv4 00:11:28.611 subtype: nvme subsystem 00:11:28.611 treq: not required 00:11:28.611 portid: 0 00:11:28.611 trsvcid: 4420 00:11:28.611 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:28.611 traddr: 10.0.0.2 00:11:28.611 eflags: none 00:11:28.611 sectype: none 00:11:28.611 =====Discovery Log Entry 3====== 00:11:28.611 trtype: tcp 00:11:28.611 adrfam: ipv4 00:11:28.611 subtype: nvme subsystem 00:11:28.611 treq: not required 00:11:28.611 portid: 0 00:11:28.611 trsvcid: 4420 00:11:28.611 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:28.611 traddr: 10.0.0.2 00:11:28.611 eflags: none 00:11:28.611 sectype: none 00:11:28.611 =====Discovery Log Entry 4====== 00:11:28.611 trtype: tcp 00:11:28.611 adrfam: ipv4 00:11:28.611 subtype: nvme subsystem 00:11:28.611 treq: not required 00:11:28.611 portid: 0 00:11:28.611 trsvcid: 4420 00:11:28.611 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:28.611 traddr: 10.0.0.2 00:11:28.611 eflags: none 00:11:28.611 sectype: none 00:11:28.611 =====Discovery Log Entry 5====== 00:11:28.611 trtype: tcp 00:11:28.611 adrfam: ipv4 00:11:28.611 subtype: discovery subsystem referral 00:11:28.611 treq: not required 00:11:28.611 portid: 0 00:11:28.611 trsvcid: 4430 00:11:28.611 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:28.611 traddr: 10.0.0.2 00:11:28.611 eflags: none 00:11:28.611 sectype: none 00:11:28.611 00:45:21 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:28.611 Perform nvmf subsystem discovery via RPC 00:11:28.611 00:45:21 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:11:28.611 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.611 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.611 [2024-04-27 00:45:21.090653] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:11:28.611 [ 00:11:28.611 { 00:11:28.611 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:28.611 "subtype": "Discovery", 00:11:28.611 "listen_addresses": [ 00:11:28.611 { 00:11:28.611 "transport": "TCP", 00:11:28.611 "trtype": "TCP", 00:11:28.611 "adrfam": "IPv4", 00:11:28.611 "traddr": "10.0.0.2", 00:11:28.611 "trsvcid": "4420" 00:11:28.611 } 00:11:28.611 ], 00:11:28.611 "allow_any_host": true, 00:11:28.611 "hosts": [] 00:11:28.611 }, 00:11:28.611 { 00:11:28.611 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:28.611 "subtype": "NVMe", 00:11:28.611 "listen_addresses": [ 00:11:28.611 { 00:11:28.611 "transport": "TCP", 00:11:28.611 "trtype": "TCP", 00:11:28.611 "adrfam": "IPv4", 00:11:28.611 "traddr": "10.0.0.2", 00:11:28.611 "trsvcid": "4420" 00:11:28.611 } 00:11:28.611 ], 00:11:28.611 "allow_any_host": true, 00:11:28.611 "hosts": [], 00:11:28.611 "serial_number": "SPDK00000000000001", 00:11:28.611 "model_number": "SPDK bdev Controller", 00:11:28.611 "max_namespaces": 32, 00:11:28.611 "min_cntlid": 1, 00:11:28.611 "max_cntlid": 65519, 00:11:28.611 "namespaces": [ 00:11:28.611 { 00:11:28.611 "nsid": 1, 00:11:28.611 "bdev_name": "Null1", 00:11:28.611 "name": "Null1", 00:11:28.611 "nguid": "C941A0CFEF9942B8BD5B496CD4B18A31", 00:11:28.611 "uuid": "c941a0cf-ef99-42b8-bd5b-496cd4b18a31" 00:11:28.611 } 00:11:28.611 ] 00:11:28.611 }, 00:11:28.611 { 00:11:28.611 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:28.611 "subtype": "NVMe", 00:11:28.611 "listen_addresses": [ 00:11:28.611 { 00:11:28.611 "transport": "TCP", 00:11:28.611 "trtype": "TCP", 00:11:28.611 "adrfam": "IPv4", 00:11:28.611 "traddr": "10.0.0.2", 00:11:28.611 "trsvcid": "4420" 00:11:28.611 } 00:11:28.611 ], 00:11:28.611 "allow_any_host": true, 00:11:28.611 "hosts": [], 00:11:28.611 "serial_number": "SPDK00000000000002", 00:11:28.611 "model_number": "SPDK bdev Controller", 00:11:28.611 "max_namespaces": 32, 00:11:28.611 "min_cntlid": 1, 00:11:28.611 "max_cntlid": 65519, 00:11:28.611 "namespaces": [ 00:11:28.611 { 00:11:28.611 "nsid": 1, 00:11:28.611 "bdev_name": "Null2", 00:11:28.611 "name": "Null2", 00:11:28.611 "nguid": "A28A1B569CD644A4BCF50B749BA9B7C8", 00:11:28.611 "uuid": "a28a1b56-9cd6-44a4-bcf5-0b749ba9b7c8" 00:11:28.611 } 00:11:28.611 ] 00:11:28.611 }, 00:11:28.611 { 00:11:28.611 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:28.611 "subtype": "NVMe", 00:11:28.611 "listen_addresses": [ 00:11:28.611 { 00:11:28.611 "transport": "TCP", 00:11:28.611 "trtype": "TCP", 00:11:28.611 "adrfam": "IPv4", 00:11:28.611 "traddr": "10.0.0.2", 00:11:28.611 "trsvcid": "4420" 00:11:28.611 } 00:11:28.611 ], 00:11:28.611 "allow_any_host": true, 00:11:28.611 "hosts": [], 00:11:28.611 "serial_number": "SPDK00000000000003", 00:11:28.611 "model_number": "SPDK bdev Controller", 00:11:28.611 "max_namespaces": 32, 00:11:28.611 "min_cntlid": 1, 00:11:28.611 "max_cntlid": 65519, 00:11:28.611 "namespaces": [ 00:11:28.611 { 00:11:28.611 "nsid": 1, 00:11:28.611 "bdev_name": "Null3", 00:11:28.611 "name": "Null3", 00:11:28.611 "nguid": "9DC3F44C11AA458689241AD456F86763", 00:11:28.611 "uuid": "9dc3f44c-11aa-4586-8924-1ad456f86763" 00:11:28.611 } 00:11:28.611 ] 
00:11:28.611 }, 00:11:28.611 { 00:11:28.611 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:28.611 "subtype": "NVMe", 00:11:28.611 "listen_addresses": [ 00:11:28.611 { 00:11:28.611 "transport": "TCP", 00:11:28.611 "trtype": "TCP", 00:11:28.611 "adrfam": "IPv4", 00:11:28.611 "traddr": "10.0.0.2", 00:11:28.611 "trsvcid": "4420" 00:11:28.611 } 00:11:28.611 ], 00:11:28.611 "allow_any_host": true, 00:11:28.611 "hosts": [], 00:11:28.611 "serial_number": "SPDK00000000000004", 00:11:28.611 "model_number": "SPDK bdev Controller", 00:11:28.611 "max_namespaces": 32, 00:11:28.611 "min_cntlid": 1, 00:11:28.611 "max_cntlid": 65519, 00:11:28.611 "namespaces": [ 00:11:28.611 { 00:11:28.611 "nsid": 1, 00:11:28.611 "bdev_name": "Null4", 00:11:28.611 "name": "Null4", 00:11:28.611 "nguid": "A1C8CE6750214899BFE4EC1F5ACD2A9B", 00:11:28.611 "uuid": "a1c8ce67-5021-4899-bfe4-ec1f5acd2a9b" 00:11:28.611 } 00:11:28.611 ] 00:11:28.611 } 00:11:28.611 ] 00:11:28.611 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.611 00:45:21 -- target/discovery.sh@42 -- # seq 1 4 00:11:28.611 00:45:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.611 00:45:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.612 00:45:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.612 00:45:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:11:28.612 00:45:21 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
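The discovery check above inspects the same target state from two sides: the wire-level Discovery Log fetched by nvme-cli, and the target's own RPC inventory. In autotest, rpc_cmd is essentially a thin wrapper around scripts/rpc.py, so the equivalent direct invocations are roughly as follows (the per-subsystem teardown continues below, one nvmf_delete_subsystem plus bdev_null_delete per iteration):

    # Initiator-side view: 6 records (discovery, cnode1-4, one 4430 referral)
    nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea \
        --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 4420
    # Target-side view of the same subsystems
    scripts/rpc.py nvmf_get_subsystems
    # Symmetric teardown, mirroring the seq 1 4 setup loop
    for i in $(seq 1 4); do
        scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i
        scripts/rpc.py bdev_null_delete Null$i
    done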
00:11:28.612 00:45:21 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:11:28.612 00:45:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:28.612 00:45:21 -- common/autotest_common.sh@10 -- # set +x 00:11:28.612 00:45:21 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:11:28.612 00:45:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:28.612 00:45:21 -- target/discovery.sh@49 -- # check_bdevs= 00:11:28.612 00:45:21 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:11:28.612 00:45:21 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:11:28.612 00:45:21 -- target/discovery.sh@57 -- # nvmftestfini 00:11:28.612 00:45:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:28.612 00:45:21 -- nvmf/common.sh@117 -- # sync 00:11:28.612 00:45:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:28.612 00:45:21 -- nvmf/common.sh@120 -- # set +e 00:11:28.612 00:45:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:28.612 00:45:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:28.612 rmmod nvme_tcp 00:11:28.612 rmmod nvme_fabrics 00:11:28.612 rmmod nvme_keyring 00:11:28.612 00:45:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:28.612 00:45:21 -- nvmf/common.sh@124 -- # set -e 00:11:28.612 00:45:21 -- nvmf/common.sh@125 -- # return 0 00:11:28.612 00:45:21 -- nvmf/common.sh@478 -- # '[' -n 2638923 ']' 00:11:28.612 00:45:21 -- nvmf/common.sh@479 -- # killprocess 2638923 00:11:28.612 00:45:21 -- common/autotest_common.sh@936 -- # '[' -z 2638923 ']' 00:11:28.612 00:45:21 -- common/autotest_common.sh@940 -- # kill -0 2638923 00:11:28.873 00:45:21 -- common/autotest_common.sh@941 -- # uname 00:11:28.873 00:45:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:28.873 00:45:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2638923 00:11:28.873 00:45:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:28.873 00:45:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:28.873 00:45:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2638923' 00:11:28.873 killing process with pid 2638923 00:11:28.873 00:45:21 -- common/autotest_common.sh@955 -- # kill 2638923 00:11:28.873 [2024-04-27 00:45:21.352538] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:11:28.873 00:45:21 -- common/autotest_common.sh@960 -- # wait 2638923 00:11:29.444 00:45:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:29.444 00:45:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:29.444 00:45:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:29.444 00:45:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:29.444 00:45:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:29.444 00:45:21 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:11:29.444 00:45:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:29.444 00:45:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.355 00:45:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:31.355 00:11:31.355 real 0m9.491s 00:11:31.355 user 0m7.408s 00:11:31.355 sys 0m4.450s 00:11:31.355 00:45:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:31.355 00:45:23 -- common/autotest_common.sh@10 -- # set +x 00:11:31.355 ************************************ 00:11:31.355 END TEST nvmf_discovery 00:11:31.355 ************************************ 00:11:31.355 00:45:23 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:31.355 00:45:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:31.355 00:45:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:31.355 00:45:23 -- common/autotest_common.sh@10 -- # set +x 00:11:31.616 ************************************ 00:11:31.616 START TEST nvmf_referrals 00:11:31.616 ************************************ 00:11:31.616 00:45:24 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:31.616 * Looking for test storage... 00:11:31.616 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:31.616 00:45:24 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:31.616 00:45:24 -- nvmf/common.sh@7 -- # uname -s 00:11:31.616 00:45:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:31.616 00:45:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:31.616 00:45:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:31.616 00:45:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:31.616 00:45:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:31.616 00:45:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:31.616 00:45:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:31.616 00:45:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:31.616 00:45:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:31.616 00:45:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:31.616 00:45:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:11:31.616 00:45:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:11:31.616 00:45:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:31.616 00:45:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:31.616 00:45:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:31.616 00:45:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:31.616 00:45:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:31.616 00:45:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:31.616 00:45:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:31.616 00:45:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:31.616 00:45:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.616 00:45:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.616 00:45:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.616 00:45:24 -- paths/export.sh@5 -- # export PATH 00:11:31.616 00:45:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:31.616 00:45:24 -- nvmf/common.sh@47 -- # : 0 00:11:31.616 00:45:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:31.616 00:45:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:31.616 00:45:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:31.616 00:45:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:31.616 00:45:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:31.616 00:45:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:31.616 00:45:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:31.616 00:45:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:31.616 00:45:24 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:31.616 00:45:24 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:31.616 00:45:24 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:31.616 00:45:24 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:31.616 00:45:24 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:31.616 00:45:24 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:31.616 00:45:24 -- target/referrals.sh@37 -- # nvmftestinit 00:11:31.616 00:45:24 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:11:31.616 00:45:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:31.616 00:45:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:31.616 00:45:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:31.616 00:45:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:31.616 00:45:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:31.616 00:45:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:31.616 00:45:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:31.616 00:45:24 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:11:31.616 00:45:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:31.616 00:45:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:31.616 00:45:24 -- common/autotest_common.sh@10 -- # set +x 00:11:36.893 00:45:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:36.893 00:45:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:36.893 00:45:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:36.893 00:45:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:36.893 00:45:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:36.893 00:45:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:36.893 00:45:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:36.893 00:45:29 -- nvmf/common.sh@295 -- # net_devs=() 00:11:36.893 00:45:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:36.893 00:45:29 -- nvmf/common.sh@296 -- # e810=() 00:11:36.893 00:45:29 -- nvmf/common.sh@296 -- # local -ga e810 00:11:36.893 00:45:29 -- nvmf/common.sh@297 -- # x722=() 00:11:36.893 00:45:29 -- nvmf/common.sh@297 -- # local -ga x722 00:11:36.893 00:45:29 -- nvmf/common.sh@298 -- # mlx=() 00:11:36.893 00:45:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:36.893 00:45:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:36.893 00:45:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:36.893 00:45:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:36.893 00:45:29 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:36.893 00:45:29 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:36.893 00:45:29 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:36.893 00:45:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:36.893 00:45:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.893 00:45:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:36.893 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:36.893 00:45:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.893 00:45:29 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.893 00:45:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.893 00:45:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.893 00:45:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.893 00:45:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:36.893 00:45:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:36.894 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:36.894 00:45:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:36.894 00:45:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:36.894 00:45:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:36.894 00:45:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:36.894 00:45:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:36.894 00:45:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:36.894 00:45:29 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:36.894 00:45:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.894 00:45:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.894 00:45:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:36.894 00:45:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.894 00:45:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:36.894 Found net devices under 0000:27:00.0: cvl_0_0 00:11:36.894 00:45:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.894 00:45:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:36.894 00:45:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:36.894 00:45:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:36.894 00:45:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:36.894 00:45:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:36.894 Found net devices under 0000:27:00.1: cvl_0_1 00:11:36.894 00:45:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:36.894 00:45:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:36.894 00:45:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:36.894 00:45:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:36.894 00:45:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:36.894 00:45:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:36.894 00:45:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:36.894 00:45:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:36.894 00:45:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:36.894 00:45:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:36.894 00:45:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:36.894 00:45:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:36.894 00:45:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:36.894 00:45:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:36.894 00:45:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:36.894 00:45:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:36.894 00:45:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:36.894 00:45:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:36.894 00:45:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:36.894 00:45:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:11:36.894 00:45:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:36.894 00:45:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:36.894 00:45:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:37.155 00:45:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:37.155 00:45:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:37.155 00:45:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:37.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:37.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.652 ms 00:11:37.155 00:11:37.155 --- 10.0.0.2 ping statistics --- 00:11:37.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.155 rtt min/avg/max/mdev = 0.652/0.652/0.652/0.000 ms 00:11:37.155 00:45:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:37.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:37.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.332 ms 00:11:37.155 00:11:37.155 --- 10.0.0.1 ping statistics --- 00:11:37.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:37.155 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:11:37.155 00:45:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:37.155 00:45:29 -- nvmf/common.sh@411 -- # return 0 00:11:37.155 00:45:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:37.155 00:45:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:37.155 00:45:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:37.155 00:45:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:37.155 00:45:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:37.155 00:45:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:37.155 00:45:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:37.155 00:45:29 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:11:37.155 00:45:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:37.155 00:45:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:37.155 00:45:29 -- common/autotest_common.sh@10 -- # set +x 00:11:37.155 00:45:29 -- nvmf/common.sh@470 -- # nvmfpid=2643271 00:11:37.155 00:45:29 -- nvmf/common.sh@471 -- # waitforlisten 2643271 00:11:37.155 00:45:29 -- common/autotest_common.sh@817 -- # '[' -z 2643271 ']' 00:11:37.155 00:45:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:37.155 00:45:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:37.155 00:45:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:37.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:37.155 00:45:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:37.155 00:45:29 -- common/autotest_common.sh@10 -- # set +x 00:11:37.155 00:45:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:37.155 [2024-04-27 00:45:29.748090] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
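The connectivity verified by the two pings above comes from the nvmf_tcp_init sequence traced here: one port of the dual-port NIC (cvl_0_0) is moved into a private namespace to act as the target, while its sibling port (cvl_0_1) stays in the root namespace as the initiator. Condensed to the commands actually run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2   # initiator -> target, then the reverse inside the netns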
00:11:37.155 [2024-04-27 00:45:29.748190] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:37.155 EAL: No free 2048 kB hugepages reported on node 1 00:11:37.416 [2024-04-27 00:45:29.878033] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:37.416 [2024-04-27 00:45:29.973850] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:37.416 [2024-04-27 00:45:29.973891] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:37.416 [2024-04-27 00:45:29.973903] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:37.416 [2024-04-27 00:45:29.973912] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:37.416 [2024-04-27 00:45:29.973919] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:37.416 [2024-04-27 00:45:29.974059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:37.416 [2024-04-27 00:45:29.974115] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:37.416 [2024-04-27 00:45:29.974102] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.416 [2024-04-27 00:45:29.974066] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:37.988 00:45:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:37.988 00:45:30 -- common/autotest_common.sh@850 -- # return 0 00:11:37.988 00:45:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:37.988 00:45:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:37.988 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 00:45:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:37.988 00:45:30 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:37.988 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.988 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 [2024-04-27 00:45:30.503136] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:37.988 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.988 00:45:30 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:37.988 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.988 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 [2024-04-27 00:45:30.519399] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:37.988 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.988 00:45:30 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:37.988 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.988 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.988 00:45:30 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:37.988 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.988 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 00:45:30 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:11:37.988 00:45:30 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:37.988 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.988 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.988 00:45:30 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.988 00:45:30 -- target/referrals.sh@48 -- # jq length 00:11:37.988 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.988 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.988 00:45:30 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:37.988 00:45:30 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:37.988 00:45:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:37.988 00:45:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:37.988 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:37.988 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:37.988 00:45:30 -- target/referrals.sh@21 -- # sort 00:11:37.988 00:45:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:37.988 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:37.988 00:45:30 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:37.988 00:45:30 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:37.988 00:45:30 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:37.988 00:45:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:37.988 00:45:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:37.988 00:45:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:37.988 00:45:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:37.988 00:45:30 -- target/referrals.sh@26 -- # sort 00:11:38.249 00:45:30 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:38.249 00:45:30 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:38.249 00:45:30 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:38.249 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.249 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:38.249 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.249 00:45:30 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:38.249 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.249 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:38.249 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.249 00:45:30 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:38.249 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.249 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:38.249 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.249 00:45:30 -- target/referrals.sh@56 -- # rpc_cmd 
nvmf_discovery_get_referrals 00:11:38.249 00:45:30 -- target/referrals.sh@56 -- # jq length 00:11:38.249 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.249 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:38.249 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.249 00:45:30 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:38.249 00:45:30 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:38.249 00:45:30 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.249 00:45:30 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.249 00:45:30 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.249 00:45:30 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.249 00:45:30 -- target/referrals.sh@26 -- # sort 00:11:38.511 00:45:30 -- target/referrals.sh@26 -- # echo 00:11:38.511 00:45:30 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:38.511 00:45:30 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:38.511 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.511 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:38.511 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.511 00:45:30 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:38.511 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.511 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:38.511 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.511 00:45:30 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:38.511 00:45:30 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.511 00:45:30 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.511 00:45:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.511 00:45:30 -- common/autotest_common.sh@10 -- # set +x 00:11:38.511 00:45:30 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.511 00:45:30 -- target/referrals.sh@21 -- # sort 00:11:38.511 00:45:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.511 00:45:31 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:38.511 00:45:31 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:38.511 00:45:31 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:38.511 00:45:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.511 00:45:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.511 00:45:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.511 00:45:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.511 00:45:31 -- target/referrals.sh@26 -- # sort 00:11:38.511 00:45:31 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:38.511 00:45:31 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:38.511 00:45:31 -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:38.511 
00:45:31 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:38.511 00:45:31 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:38.511 00:45:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.511 00:45:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:38.772 00:45:31 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:38.772 00:45:31 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:38.772 00:45:31 -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:38.772 00:45:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:38.772 00:45:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.772 00:45:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:38.772 00:45:31 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:38.772 00:45:31 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:38.772 00:45:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.772 00:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:38.772 00:45:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.772 00:45:31 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:38.772 00:45:31 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:38.772 00:45:31 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:38.772 00:45:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:38.772 00:45:31 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:38.772 00:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:38.772 00:45:31 -- target/referrals.sh@21 -- # sort 00:11:38.772 00:45:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:38.772 00:45:31 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:38.772 00:45:31 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:38.772 00:45:31 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:38.772 00:45:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:38.772 00:45:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:38.772 00:45:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:38.772 00:45:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:38.772 00:45:31 -- target/referrals.sh@26 -- # sort 00:11:39.033 00:45:31 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:39.033 00:45:31 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:39.033 00:45:31 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:39.033 00:45:31 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:39.033 00:45:31 -- 
target/referrals.sh@75 -- # jq -r .subnqn 00:11:39.033 00:45:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.033 00:45:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:39.033 00:45:31 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:39.033 00:45:31 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:39.033 00:45:31 -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:39.033 00:45:31 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:39.033 00:45:31 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.033 00:45:31 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:39.292 00:45:31 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:39.293 00:45:31 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:39.293 00:45:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:39.293 00:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:39.293 00:45:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:39.293 00:45:31 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:39.293 00:45:31 -- target/referrals.sh@82 -- # jq length 00:11:39.293 00:45:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:39.293 00:45:31 -- common/autotest_common.sh@10 -- # set +x 00:11:39.293 00:45:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:39.293 00:45:31 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:39.293 00:45:31 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:11:39.293 00:45:31 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:39.293 00:45:31 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:39.293 00:45:31 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:39.293 00:45:31 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:39.293 00:45:31 -- target/referrals.sh@26 -- # sort 00:11:39.293 00:45:31 -- target/referrals.sh@26 -- # echo 00:11:39.293 00:45:31 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:39.293 00:45:31 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:39.293 00:45:31 -- target/referrals.sh@86 -- # nvmftestfini 00:11:39.293 00:45:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:11:39.293 00:45:31 -- nvmf/common.sh@117 -- # sync 00:11:39.293 00:45:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.293 00:45:31 -- nvmf/common.sh@120 -- # set +e 00:11:39.293 00:45:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.293 00:45:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.293 rmmod nvme_tcp 00:11:39.293 rmmod nvme_fabrics 00:11:39.293 rmmod nvme_keyring 00:11:39.551 00:45:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.551 00:45:31 -- nvmf/common.sh@124 -- # 
set -e 00:11:39.551 00:45:31 -- nvmf/common.sh@125 -- # return 0 00:11:39.551 00:45:31 -- nvmf/common.sh@478 -- # '[' -n 2643271 ']' 00:11:39.551 00:45:31 -- nvmf/common.sh@479 -- # killprocess 2643271 00:11:39.551 00:45:31 -- common/autotest_common.sh@936 -- # '[' -z 2643271 ']' 00:11:39.551 00:45:31 -- common/autotest_common.sh@940 -- # kill -0 2643271 00:11:39.551 00:45:31 -- common/autotest_common.sh@941 -- # uname 00:11:39.551 00:45:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:11:39.551 00:45:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2643271 00:11:39.551 00:45:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:11:39.551 00:45:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:11:39.551 00:45:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2643271' 00:11:39.551 killing process with pid 2643271 00:11:39.551 00:45:32 -- common/autotest_common.sh@955 -- # kill 2643271 00:11:39.551 00:45:32 -- common/autotest_common.sh@960 -- # wait 2643271 00:11:39.810 00:45:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:11:39.810 00:45:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:11:39.810 00:45:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:11:39.810 00:45:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:40.070 00:45:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:40.070 00:45:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.070 00:45:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:40.070 00:45:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.975 00:45:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.975 00:11:41.975 real 0m10.479s 00:11:41.975 user 0m11.901s 00:11:41.975 sys 0m4.654s 00:11:41.975 00:45:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:11:41.975 00:45:34 -- common/autotest_common.sh@10 -- # set +x 00:11:41.975 ************************************ 00:11:41.975 END TEST nvmf_referrals 00:11:41.975 ************************************ 00:11:41.975 00:45:34 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:41.975 00:45:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:11:41.975 00:45:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:11:41.975 00:45:34 -- common/autotest_common.sh@10 -- # set +x 00:11:42.237 ************************************ 00:11:42.237 START TEST nvmf_connect_disconnect 00:11:42.237 ************************************ 00:11:42.237 00:45:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:42.237 * Looking for test storage... 
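Between tests, the harness prints the real/user/sys timing triple and the START/END banners seen here. The referrals test that just ended exercises the full referral lifecycle; reduced to the raw RPC calls it traced (rpc_cmd again resolving to scripts/rpc.py, host NQN/ID being the values generated by nvme gen-hostnqn earlier in this run):

    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length          # 1
    # Referral shows up in the Discovery Log served on port 8009
    nvme discover --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -a 10.0.0.2 -s 8009 -o json
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430
    scripts/rpc.py nvmf_discovery_get_referrals | jq length          # back to 0
    # The -n flag attaches a subsystem NQN to the referral, as in:
    scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1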
00:11:42.237 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:11:42.237 00:45:34 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:11:42.237 00:45:34 -- nvmf/common.sh@7 -- # uname -s 00:11:42.237 00:45:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:42.237 00:45:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:42.237 00:45:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:42.237 00:45:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:42.237 00:45:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:42.237 00:45:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:42.237 00:45:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:42.237 00:45:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:42.237 00:45:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:42.237 00:45:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:42.237 00:45:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:11:42.237 00:45:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:11:42.237 00:45:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:42.237 00:45:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:42.237 00:45:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:11:42.237 00:45:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:42.237 00:45:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:11:42.237 00:45:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:42.237 00:45:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:42.237 00:45:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:42.237 00:45:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.237 00:45:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.237 00:45:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.237 00:45:34 -- paths/export.sh@5 -- # export PATH 00:11:42.237 00:45:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:42.237 00:45:34 -- nvmf/common.sh@47 -- # : 0 00:11:42.237 00:45:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:42.237 00:45:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:42.237 00:45:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:42.237 00:45:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:42.237 00:45:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:42.237 00:45:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:42.237 00:45:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:42.237 00:45:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:42.237 00:45:34 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:42.237 00:45:34 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:42.237 00:45:34 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:42.237 00:45:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:11:42.237 00:45:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:42.237 00:45:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:11:42.237 00:45:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:11:42.237 00:45:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:11:42.237 00:45:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:42.237 00:45:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:42.237 00:45:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:42.237 00:45:34 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:11:42.237 00:45:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:11:42.237 00:45:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:11:42.237 00:45:34 -- common/autotest_common.sh@10 -- # set +x 00:11:47.556 00:45:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:11:47.556 00:45:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.556 00:45:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.556 00:45:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.556 00:45:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.556 00:45:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.556 00:45:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.556 00:45:39 -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.556 00:45:39 -- nvmf/common.sh@295 -- # local -ga net_devs 
00:11:47.557 00:45:39 -- nvmf/common.sh@296 -- # e810=() 00:11:47.557 00:45:39 -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.557 00:45:39 -- nvmf/common.sh@297 -- # x722=() 00:11:47.557 00:45:39 -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.557 00:45:39 -- nvmf/common.sh@298 -- # mlx=() 00:11:47.557 00:45:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.557 00:45:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.557 00:45:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.557 00:45:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.557 00:45:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.557 00:45:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:11:47.557 Found 0000:27:00.0 (0x8086 - 0x159b) 00:11:47.557 00:45:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.557 00:45:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:11:47.557 Found 0000:27:00.1 (0x8086 - 0x159b) 00:11:47.557 00:45:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.557 00:45:39 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.557 00:45:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.557 00:45:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:47.557 00:45:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.557 00:45:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:11:47.557 Found net devices under 0000:27:00.0: 
cvl_0_0 00:11:47.557 00:45:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.557 00:45:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.557 00:45:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.557 00:45:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:11:47.557 00:45:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.557 00:45:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:11:47.557 Found net devices under 0000:27:00.1: cvl_0_1 00:11:47.557 00:45:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.557 00:45:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:11:47.557 00:45:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:11:47.557 00:45:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:11:47.557 00:45:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:11:47.557 00:45:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.557 00:45:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.557 00:45:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.557 00:45:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.557 00:45:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.557 00:45:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.557 00:45:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.557 00:45:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.557 00:45:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.557 00:45:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:47.557 00:45:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:47.557 00:45:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:47.557 00:45:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:47.557 00:45:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:47.557 00:45:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:47.557 00:45:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:47.557 00:45:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:47.557 00:45:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:47.557 00:45:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:47.557 00:45:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:47.557 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:47.557 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.469 ms 00:11:47.557 00:11:47.557 --- 10.0.0.2 ping statistics --- 00:11:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.557 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:11:47.557 00:45:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:47.557 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:47.557 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:47.557 00:11:47.557 --- 10.0.0.1 ping statistics --- 00:11:47.557 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:47.557 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:47.557 00:45:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:47.557 00:45:40 -- nvmf/common.sh@411 -- # return 0 00:11:47.557 00:45:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:11:47.557 00:45:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:47.557 00:45:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:11:47.557 00:45:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:11:47.557 00:45:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:47.557 00:45:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:11:47.557 00:45:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:11:47.557 00:45:40 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:47.557 00:45:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:11:47.557 00:45:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:11:47.557 00:45:40 -- common/autotest_common.sh@10 -- # set +x 00:11:47.557 00:45:40 -- nvmf/common.sh@470 -- # nvmfpid=2647862 00:11:47.557 00:45:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.557 00:45:40 -- nvmf/common.sh@471 -- # waitforlisten 2647862 00:11:47.557 00:45:40 -- common/autotest_common.sh@817 -- # '[' -z 2647862 ']' 00:11:47.557 00:45:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.557 00:45:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:11:47.557 00:45:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.557 00:45:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:11:47.557 00:45:40 -- common/autotest_common.sh@10 -- # set +x 00:11:47.557 [2024-04-27 00:45:40.217792] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:11:47.557 [2024-04-27 00:45:40.217856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.818 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.818 [2024-04-27 00:45:40.307763] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.818 [2024-04-27 00:45:40.401578] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.818 [2024-04-27 00:45:40.401616] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.818 [2024-04-27 00:45:40.401627] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.818 [2024-04-27 00:45:40.401636] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.818 [2024-04-27 00:45:40.401643] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
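The ping exchanges logged above are the last step of nvmf_tcp_init: with both ports of the same E810 NIC present, the harness moves the target-side port into a private network namespace and leaves the initiator-side port in the root namespace, so traffic between 10.0.0.1 and 10.0.0.2 crosses the physical link. A minimal sketch of that sequence, reassembled from the xtrace lines above (interface names as in this run):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port leaves the root namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP listener port
  ping -c 1 10.0.0.2                                             # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespaced target -> root namespace

Every later RPC and the nvmf_tgt process itself then run under ip netns exec cvl_0_0_ns_spdk, which is why the app above is launched as ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0xF.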
00:11:47.818 [2024-04-27 00:45:40.401756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.818 [2024-04-27 00:45:40.401833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.818 [2024-04-27 00:45:40.401850] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.818 [2024-04-27 00:45:40.401853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:11:48.389 00:45:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:11:48.389 00:45:40 -- common/autotest_common.sh@850 -- # return 0 00:11:48.389 00:45:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:11:48.389 00:45:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:11:48.389 00:45:40 -- common/autotest_common.sh@10 -- # set +x 00:11:48.389 00:45:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.389 00:45:40 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:48.389 00:45:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.389 00:45:40 -- common/autotest_common.sh@10 -- # set +x 00:11:48.389 [2024-04-27 00:45:40.996194] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.389 00:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.389 00:45:41 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:11:48.389 00:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.389 00:45:41 -- common/autotest_common.sh@10 -- # set +x 00:11:48.389 00:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.389 00:45:41 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:48.389 00:45:41 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:48.389 00:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.389 00:45:41 -- common/autotest_common.sh@10 -- # set +x 00:11:48.389 00:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.389 00:45:41 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:48.389 00:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.389 00:45:41 -- common/autotest_common.sh@10 -- # set +x 00:11:48.389 00:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.389 00:45:41 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.389 00:45:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:11:48.389 00:45:41 -- common/autotest_common.sh@10 -- # set +x 00:11:48.389 [2024-04-27 00:45:41.066513] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.389 00:45:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:11:48.389 00:45:41 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:48.389 00:45:41 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:48.389 00:45:41 -- target/connect_disconnect.sh@34 -- # set +x 00:11:52.582 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:55.874 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.348 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.633 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:06.633 00:45:58 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:06.633 00:45:58 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:06.633 00:45:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:06.633 00:45:58 -- nvmf/common.sh@117 -- # sync 00:12:06.633 00:45:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:06.633 00:45:58 -- nvmf/common.sh@120 -- # set +e 00:12:06.633 00:45:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:06.633 00:45:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:06.633 rmmod nvme_tcp 00:12:06.633 rmmod nvme_fabrics 00:12:06.633 rmmod nvme_keyring 00:12:06.633 00:45:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:06.633 00:45:58 -- nvmf/common.sh@124 -- # set -e 00:12:06.633 00:45:58 -- nvmf/common.sh@125 -- # return 0 00:12:06.633 00:45:58 -- nvmf/common.sh@478 -- # '[' -n 2647862 ']' 00:12:06.633 00:45:58 -- nvmf/common.sh@479 -- # killprocess 2647862 00:12:06.633 00:45:58 -- common/autotest_common.sh@936 -- # '[' -z 2647862 ']' 00:12:06.633 00:45:58 -- common/autotest_common.sh@940 -- # kill -0 2647862 00:12:06.633 00:45:58 -- common/autotest_common.sh@941 -- # uname 00:12:06.633 00:45:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:06.633 00:45:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2647862 00:12:06.633 00:45:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:06.633 00:45:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:06.633 00:45:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2647862' 00:12:06.633 killing process with pid 2647862 00:12:06.633 00:45:58 -- common/autotest_common.sh@955 -- # kill 2647862 00:12:06.633 00:45:58 -- common/autotest_common.sh@960 -- # wait 2647862 00:12:06.893 00:45:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:06.893 00:45:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:06.893 00:45:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:06.893 00:45:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:06.893 00:45:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:06.893 00:45:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:06.893 00:45:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:06.893 00:45:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.425 00:46:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:09.425 00:12:09.425 real 0m26.856s 00:12:09.425 user 1m16.918s 00:12:09.425 sys 0m4.887s 00:12:09.425 00:46:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:09.425 00:46:01 -- common/autotest_common.sh@10 -- # set +x 00:12:09.425 ************************************ 00:12:09.425 END TEST nvmf_connect_disconnect 00:12:09.425 ************************************ 00:12:09.425 00:46:01 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:09.425 00:46:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:09.425 00:46:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:09.425 00:46:01 -- common/autotest_common.sh@10 -- # set +x 00:12:09.425 ************************************ 00:12:09.425 START TEST nvmf_multitarget 00:12:09.425 ************************************ 00:12:09.425 00:46:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 
00:12:09.425 * Looking for test storage... 00:12:09.425 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:09.425 00:46:01 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.425 00:46:01 -- nvmf/common.sh@7 -- # uname -s 00:12:09.425 00:46:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.425 00:46:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.425 00:46:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.425 00:46:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.425 00:46:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:09.425 00:46:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:09.425 00:46:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.425 00:46:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:09.425 00:46:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.425 00:46:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:09.425 00:46:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:12:09.425 00:46:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:12:09.425 00:46:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.425 00:46:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:09.425 00:46:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:09.425 00:46:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.425 00:46:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:09.425 00:46:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.425 00:46:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.425 00:46:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.425 00:46:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.425 00:46:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.425 00:46:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.425 00:46:01 -- paths/export.sh@5 -- # export PATH 00:12:09.425 00:46:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.425 00:46:01 -- nvmf/common.sh@47 -- # : 0 00:12:09.425 00:46:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:09.425 00:46:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:09.425 00:46:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:09.425 00:46:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.425 00:46:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.425 00:46:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:09.425 00:46:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:09.425 00:46:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:09.425 00:46:01 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:09.425 00:46:01 -- target/multitarget.sh@15 -- # nvmftestinit 00:12:09.425 00:46:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:09.425 00:46:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.425 00:46:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:09.425 00:46:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:09.425 00:46:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:09.425 00:46:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.425 00:46:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:09.426 00:46:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:09.426 00:46:01 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:12:09.426 00:46:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:09.426 00:46:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:09.426 00:46:01 -- common/autotest_common.sh@10 -- # set +x 00:12:14.703 00:46:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:14.703 00:46:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.703 00:46:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.703 00:46:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.703 00:46:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.703 00:46:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.703 00:46:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.703 00:46:07 -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.703 00:46:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.704 00:46:07 -- 
nvmf/common.sh@296 -- # e810=() 00:12:14.704 00:46:07 -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.704 00:46:07 -- nvmf/common.sh@297 -- # x722=() 00:12:14.704 00:46:07 -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.704 00:46:07 -- nvmf/common.sh@298 -- # mlx=() 00:12:14.704 00:46:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.704 00:46:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.704 00:46:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.704 00:46:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.704 00:46:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.704 00:46:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:14.704 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:14.704 00:46:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.704 00:46:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:14.704 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:14.704 00:46:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.704 00:46:07 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.704 00:46:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.704 00:46:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:14.704 00:46:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.704 00:46:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:14.704 Found net devices under 0000:27:00.0: cvl_0_0 00:12:14.704 
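The "Found net devices under ..." lines come from a sysfs walk: for each matching PCI function the harness globs /sys/bus/pci/devices/$pci/net/ and strips the directory prefix to get the interface name; the second function, 0000:27:00.1, is reported the same way just below. A short sketch of that lookup, with the two E810 addresses from this run filled in as an assumption:

  for pci in 0000:27:00.0 0000:27:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)           # netdevs the kernel bound to this function
      pci_net_devs=("${pci_net_devs[@]##*/}")                    # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
      net_devs+=("${pci_net_devs[@]}")
  done

The resulting net_devs list (cvl_0_0, cvl_0_1) is what nvmf_tcp_init later splits into the target and initiator interfaces.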
00:46:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.704 00:46:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.704 00:46:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.704 00:46:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:14.704 00:46:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.704 00:46:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:14.704 Found net devices under 0000:27:00.1: cvl_0_1 00:12:14.704 00:46:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.704 00:46:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:14.704 00:46:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:14.704 00:46:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:14.704 00:46:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.704 00:46:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.704 00:46:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.704 00:46:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.704 00:46:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.704 00:46:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.704 00:46:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.704 00:46:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.704 00:46:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.704 00:46:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.704 00:46:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.704 00:46:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.704 00:46:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.704 00:46:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.704 00:46:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.704 00:46:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.704 00:46:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.704 00:46:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.704 00:46:07 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.704 00:46:07 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.704 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.704 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.610 ms 00:12:14.704 00:12:14.704 --- 10.0.0.2 ping statistics --- 00:12:14.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.704 rtt min/avg/max/mdev = 0.610/0.610/0.610/0.000 ms 00:12:14.704 00:46:07 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.704 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.704 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.177 ms 00:12:14.704 00:12:14.704 --- 10.0.0.1 ping statistics --- 00:12:14.704 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.704 rtt min/avg/max/mdev = 0.177/0.177/0.177/0.000 ms 00:12:14.704 00:46:07 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.704 00:46:07 -- nvmf/common.sh@411 -- # return 0 00:12:14.704 00:46:07 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:14.704 00:46:07 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.704 00:46:07 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:14.704 00:46:07 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.704 00:46:07 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:14.704 00:46:07 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:14.704 00:46:07 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:14.704 00:46:07 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:14.704 00:46:07 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:14.704 00:46:07 -- common/autotest_common.sh@10 -- # set +x 00:12:14.704 00:46:07 -- nvmf/common.sh@470 -- # nvmfpid=2655579 00:12:14.704 00:46:07 -- nvmf/common.sh@471 -- # waitforlisten 2655579 00:12:14.704 00:46:07 -- common/autotest_common.sh@817 -- # '[' -z 2655579 ']' 00:12:14.704 00:46:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.704 00:46:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:14.704 00:46:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.704 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.704 00:46:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:14.704 00:46:07 -- common/autotest_common.sh@10 -- # set +x 00:12:14.704 00:46:07 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.963 [2024-04-27 00:46:07.420043] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:12:14.963 [2024-04-27 00:46:07.420147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.963 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.963 [2024-04-27 00:46:07.542938] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.963 [2024-04-27 00:46:07.639129] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.963 [2024-04-27 00:46:07.639168] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.963 [2024-04-27 00:46:07.639179] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.963 [2024-04-27 00:46:07.639188] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.963 [2024-04-27 00:46:07.639195] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
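With the target listening on /var/tmp/spdk.sock, the multitarget test below is purely an RPC exercise: count the default target, create two more, verify the count, delete them, and verify again. In outline, using the rpc_py wrapper path assigned earlier in this transcript:

  rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py
  $rpc_py nvmf_get_targets | jq length             # 1: only the default target exists
  $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
  $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
  $rpc_py nvmf_get_targets | jq length             # 3 after the two creates
  $rpc_py nvmf_delete_target -n nvmf_tgt_1
  $rpc_py nvmf_delete_target -n nvmf_tgt_2
  $rpc_py nvmf_get_targets | jq length             # back to 1

The create calls echo the new target names ("nvmf_tgt_1", "nvmf_tgt_2") and each delete prints true on success, which is exactly what the log below shows.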
00:12:14.963 [2024-04-27 00:46:07.639325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.963 [2024-04-27 00:46:07.639419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.963 [2024-04-27 00:46:07.639455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.963 [2024-04-27 00:46:07.639467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.598 00:46:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:15.598 00:46:08 -- common/autotest_common.sh@850 -- # return 0 00:12:15.598 00:46:08 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:15.598 00:46:08 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:15.598 00:46:08 -- common/autotest_common.sh@10 -- # set +x 00:12:15.598 00:46:08 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.598 00:46:08 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:15.598 00:46:08 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:15.598 00:46:08 -- target/multitarget.sh@21 -- # jq length 00:12:15.598 00:46:08 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:15.598 00:46:08 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:15.859 "nvmf_tgt_1" 00:12:15.859 00:46:08 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:15.859 "nvmf_tgt_2" 00:12:15.859 00:46:08 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:15.859 00:46:08 -- target/multitarget.sh@28 -- # jq length 00:12:15.859 00:46:08 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:15.859 00:46:08 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:16.118 true 00:12:16.118 00:46:08 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:16.118 true 00:12:16.118 00:46:08 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:16.118 00:46:08 -- target/multitarget.sh@35 -- # jq length 00:12:16.118 00:46:08 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:16.118 00:46:08 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:16.118 00:46:08 -- target/multitarget.sh@41 -- # nvmftestfini 00:12:16.118 00:46:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:16.118 00:46:08 -- nvmf/common.sh@117 -- # sync 00:12:16.118 00:46:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.118 00:46:08 -- nvmf/common.sh@120 -- # set +e 00:12:16.118 00:46:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.118 00:46:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.118 rmmod nvme_tcp 00:12:16.118 rmmod nvme_fabrics 00:12:16.118 rmmod nvme_keyring 00:12:16.377 00:46:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.377 00:46:08 -- nvmf/common.sh@124 -- # set -e 00:12:16.377 00:46:08 -- nvmf/common.sh@125 -- # return 0 00:12:16.377 00:46:08 -- nvmf/common.sh@478 
-- # '[' -n 2655579 ']' 00:12:16.377 00:46:08 -- nvmf/common.sh@479 -- # killprocess 2655579 00:12:16.377 00:46:08 -- common/autotest_common.sh@936 -- # '[' -z 2655579 ']' 00:12:16.377 00:46:08 -- common/autotest_common.sh@940 -- # kill -0 2655579 00:12:16.377 00:46:08 -- common/autotest_common.sh@941 -- # uname 00:12:16.377 00:46:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:16.377 00:46:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2655579 00:12:16.377 00:46:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:16.377 00:46:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:16.377 00:46:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2655579' 00:12:16.377 killing process with pid 2655579 00:12:16.377 00:46:08 -- common/autotest_common.sh@955 -- # kill 2655579 00:12:16.377 00:46:08 -- common/autotest_common.sh@960 -- # wait 2655579 00:12:16.636 00:46:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:16.636 00:46:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:16.636 00:46:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:16.636 00:46:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.636 00:46:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.636 00:46:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.636 00:46:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:16.636 00:46:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.174 00:46:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.174 00:12:19.174 real 0m9.714s 00:12:19.174 user 0m8.589s 00:12:19.174 sys 0m4.589s 00:12:19.174 00:46:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:19.174 00:46:11 -- common/autotest_common.sh@10 -- # set +x 00:12:19.174 ************************************ 00:12:19.174 END TEST nvmf_multitarget 00:12:19.174 ************************************ 00:12:19.174 00:46:11 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.174 00:46:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:19.174 00:46:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:19.174 00:46:11 -- common/autotest_common.sh@10 -- # set +x 00:12:19.174 ************************************ 00:12:19.174 START TEST nvmf_rpc 00:12:19.174 ************************************ 00:12:19.174 00:46:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:19.174 * Looking for test storage... 
00:12:19.174 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:19.174 00:46:11 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.174 00:46:11 -- nvmf/common.sh@7 -- # uname -s 00:12:19.174 00:46:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.174 00:46:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.174 00:46:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.174 00:46:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.174 00:46:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.174 00:46:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.174 00:46:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.174 00:46:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.174 00:46:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.174 00:46:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.174 00:46:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:12:19.174 00:46:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:12:19.174 00:46:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.174 00:46:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.174 00:46:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:19.174 00:46:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.174 00:46:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:19.174 00:46:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.174 00:46:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.174 00:46:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.174 00:46:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.174 00:46:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.174 00:46:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.174 00:46:11 -- paths/export.sh@5 -- # export PATH 00:12:19.175 00:46:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.175 00:46:11 -- nvmf/common.sh@47 -- # : 0 00:12:19.175 00:46:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.175 00:46:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.175 00:46:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.175 00:46:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.175 00:46:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.175 00:46:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.175 00:46:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.175 00:46:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.175 00:46:11 -- target/rpc.sh@11 -- # loops=5 00:12:19.175 00:46:11 -- target/rpc.sh@23 -- # nvmftestinit 00:12:19.175 00:46:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:19.175 00:46:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.175 00:46:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:19.175 00:46:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:19.175 00:46:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:19.175 00:46:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.175 00:46:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:19.175 00:46:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.175 00:46:11 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:12:19.175 00:46:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:19.175 00:46:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.175 00:46:11 -- common/autotest_common.sh@10 -- # set +x 00:12:25.768 00:46:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:12:25.768 00:46:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:12:25.768 00:46:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:25.768 00:46:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:25.768 00:46:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:25.768 00:46:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:25.768 00:46:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:25.768 00:46:17 -- nvmf/common.sh@295 -- # net_devs=() 00:12:25.768 00:46:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:25.768 00:46:17 -- nvmf/common.sh@296 -- # e810=() 00:12:25.768 00:46:17 -- nvmf/common.sh@296 -- # local -ga e810 
00:12:25.768 00:46:17 -- nvmf/common.sh@297 -- # x722=() 00:12:25.768 00:46:17 -- nvmf/common.sh@297 -- # local -ga x722 00:12:25.768 00:46:17 -- nvmf/common.sh@298 -- # mlx=() 00:12:25.768 00:46:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:12:25.768 00:46:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:25.768 00:46:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:25.768 00:46:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:25.768 00:46:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.768 00:46:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:12:25.768 Found 0000:27:00.0 (0x8086 - 0x159b) 00:12:25.768 00:46:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:25.768 00:46:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:12:25.768 Found 0000:27:00.1 (0x8086 - 0x159b) 00:12:25.768 00:46:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:25.768 00:46:17 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.768 00:46:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.768 00:46:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:25.768 00:46:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.768 00:46:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:12:25.768 Found net devices under 0000:27:00.0: cvl_0_0 00:12:25.768 00:46:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.768 00:46:17 -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:25.768 00:46:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:25.768 00:46:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:12:25.768 00:46:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:25.768 00:46:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:12:25.768 Found net devices under 0000:27:00.1: cvl_0_1 00:12:25.768 00:46:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:12:25.768 00:46:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:12:25.768 00:46:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:12:25.768 00:46:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:12:25.768 00:46:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:12:25.768 00:46:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:25.768 00:46:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:25.768 00:46:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:25.768 00:46:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:25.768 00:46:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:25.768 00:46:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:25.768 00:46:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:25.768 00:46:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:25.768 00:46:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:25.768 00:46:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:25.768 00:46:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:25.768 00:46:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:25.768 00:46:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:25.768 00:46:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:25.768 00:46:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:25.768 00:46:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:25.768 00:46:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:25.769 00:46:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:25.769 00:46:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:25.769 00:46:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:25.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:25.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:12:25.769 00:12:25.769 --- 10.0.0.2 ping statistics --- 00:12:25.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.769 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:12:25.769 00:46:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:25.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:25.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:12:25.769 00:12:25.769 --- 10.0.0.1 ping statistics --- 00:12:25.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:25.769 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:12:25.769 00:46:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:25.769 00:46:17 -- nvmf/common.sh@411 -- # return 0 00:12:25.769 00:46:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:12:25.769 00:46:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:25.769 00:46:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:12:25.769 00:46:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:12:25.769 00:46:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:25.769 00:46:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:12:25.769 00:46:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:12:25.769 00:46:17 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:25.769 00:46:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:12:25.769 00:46:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:12:25.769 00:46:17 -- common/autotest_common.sh@10 -- # set +x 00:12:25.769 00:46:17 -- nvmf/common.sh@470 -- # nvmfpid=2659984 00:12:25.769 00:46:17 -- nvmf/common.sh@471 -- # waitforlisten 2659984 00:12:25.769 00:46:17 -- common/autotest_common.sh@817 -- # '[' -z 2659984 ']' 00:12:25.769 00:46:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:25.769 00:46:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:12:25.769 00:46:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:25.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:25.769 00:46:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:12:25.769 00:46:17 -- common/autotest_common.sh@10 -- # set +x 00:12:25.769 00:46:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:25.769 [2024-04-27 00:46:17.629072] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:12:25.769 [2024-04-27 00:46:17.629174] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:25.769 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.769 [2024-04-27 00:46:17.757636] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:25.769 [2024-04-27 00:46:17.852960] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:25.769 [2024-04-27 00:46:17.852999] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:25.769 [2024-04-27 00:46:17.853010] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:25.769 [2024-04-27 00:46:17.853020] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:25.769 [2024-04-27 00:46:17.853027] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
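The nvmf_tcp_init sequence traced above builds the point-to-point topology for this run: the two back-to-back ice-driven ports (0x8086:0x159b) are split across network namespaces, with cvl_0_0 moved into cvl_0_0_ns_spdk as the target side (10.0.0.2) and cvl_0_1 left in the root namespace as the initiator side (10.0.0.1), port 4420 opened in iptables, and both directions verified with ping before nvmf_tgt is launched inside the namespace. A minimal sketch of the same wiring, assuming the interface names from this run (other NICs will enumerate differently):

  NS=cvl_0_0_ns_spdk
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                   # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side, root namespace
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1            # target -> initiator
  # then the target runs inside the namespace, as above:
  # ip netns exec "$NS" .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Because the target process is wrapped in "ip netns exec", every listener it opens on 10.0.0.2 is reachable from the root namespace only through the back-to-back link, which is what makes the single-host loopback test behave like a real two-node fabric.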
00:12:25.769 [2024-04-27 00:46:17.853240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:12:25.769 [2024-04-27 00:46:17.853269] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:12:25.769 [2024-04-27 00:46:17.853245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:12:25.769 [2024-04-27 00:46:17.853281] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:12:25.769 00:46:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:12:25.769 00:46:18 -- common/autotest_common.sh@850 -- # return 0 00:12:25.769 00:46:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:12:25.769 00:46:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:12:25.769 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:25.769 00:46:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:25.769 00:46:18 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:25.769 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:25.769 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:25.769 00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:25.769 00:46:18 -- target/rpc.sh@26 -- # stats='{ 00:12:25.769 "tick_rate": 1900000000, 00:12:25.769 "poll_groups": [ 00:12:25.769 { 00:12:25.769 "name": "nvmf_tgt_poll_group_0", 00:12:25.769 "admin_qpairs": 0, 00:12:25.769 "io_qpairs": 0, 00:12:25.769 "current_admin_qpairs": 0, 00:12:25.769 "current_io_qpairs": 0, 00:12:25.769 "pending_bdev_io": 0, 00:12:25.769 "completed_nvme_io": 0, 00:12:25.769 "transports": [] 00:12:25.769 }, 00:12:25.769 { 00:12:25.769 "name": "nvmf_tgt_poll_group_1", 00:12:25.769 "admin_qpairs": 0, 00:12:25.769 "io_qpairs": 0, 00:12:25.769 "current_admin_qpairs": 0, 00:12:25.769 "current_io_qpairs": 0, 00:12:25.769 "pending_bdev_io": 0, 00:12:25.769 "completed_nvme_io": 0, 00:12:25.769 "transports": [] 00:12:25.769 }, 00:12:25.769 { 00:12:25.769 "name": "nvmf_tgt_poll_group_2", 00:12:25.769 "admin_qpairs": 0, 00:12:25.769 "io_qpairs": 0, 00:12:25.769 "current_admin_qpairs": 0, 00:12:25.769 "current_io_qpairs": 0, 00:12:25.769 "pending_bdev_io": 0, 00:12:25.769 "completed_nvme_io": 0, 00:12:25.769 "transports": [] 00:12:25.769 }, 00:12:25.769 { 00:12:25.769 "name": "nvmf_tgt_poll_group_3", 00:12:25.769 "admin_qpairs": 0, 00:12:25.769 "io_qpairs": 0, 00:12:25.769 "current_admin_qpairs": 0, 00:12:25.769 "current_io_qpairs": 0, 00:12:25.769 "pending_bdev_io": 0, 00:12:25.769 "completed_nvme_io": 0, 00:12:25.769 "transports": [] 00:12:25.769 } 00:12:25.769 ] 00:12:25.769 }' 00:12:25.769 00:46:18 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:25.769 00:46:18 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:25.769 00:46:18 -- target/rpc.sh@15 -- # wc -l 00:12:25.769 00:46:18 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:25.769 00:46:18 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:25.769 00:46:18 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:26.028 00:46:18 -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:26.028 00:46:18 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.028 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.028 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 [2024-04-27 00:46:18.485892] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.028 00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.028 00:46:18 -- 
target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:26.028 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.028 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.028 00:46:18 -- target/rpc.sh@33 -- # stats='{ 00:12:26.028 "tick_rate": 1900000000, 00:12:26.028 "poll_groups": [ 00:12:26.028 { 00:12:26.028 "name": "nvmf_tgt_poll_group_0", 00:12:26.028 "admin_qpairs": 0, 00:12:26.028 "io_qpairs": 0, 00:12:26.028 "current_admin_qpairs": 0, 00:12:26.028 "current_io_qpairs": 0, 00:12:26.028 "pending_bdev_io": 0, 00:12:26.028 "completed_nvme_io": 0, 00:12:26.028 "transports": [ 00:12:26.028 { 00:12:26.028 "trtype": "TCP" 00:12:26.028 } 00:12:26.028 ] 00:12:26.028 }, 00:12:26.028 { 00:12:26.028 "name": "nvmf_tgt_poll_group_1", 00:12:26.028 "admin_qpairs": 0, 00:12:26.028 "io_qpairs": 0, 00:12:26.028 "current_admin_qpairs": 0, 00:12:26.028 "current_io_qpairs": 0, 00:12:26.028 "pending_bdev_io": 0, 00:12:26.028 "completed_nvme_io": 0, 00:12:26.028 "transports": [ 00:12:26.028 { 00:12:26.028 "trtype": "TCP" 00:12:26.028 } 00:12:26.028 ] 00:12:26.028 }, 00:12:26.028 { 00:12:26.028 "name": "nvmf_tgt_poll_group_2", 00:12:26.028 "admin_qpairs": 0, 00:12:26.028 "io_qpairs": 0, 00:12:26.028 "current_admin_qpairs": 0, 00:12:26.028 "current_io_qpairs": 0, 00:12:26.028 "pending_bdev_io": 0, 00:12:26.028 "completed_nvme_io": 0, 00:12:26.028 "transports": [ 00:12:26.028 { 00:12:26.028 "trtype": "TCP" 00:12:26.028 } 00:12:26.028 ] 00:12:26.028 }, 00:12:26.028 { 00:12:26.028 "name": "nvmf_tgt_poll_group_3", 00:12:26.028 "admin_qpairs": 0, 00:12:26.028 "io_qpairs": 0, 00:12:26.028 "current_admin_qpairs": 0, 00:12:26.028 "current_io_qpairs": 0, 00:12:26.028 "pending_bdev_io": 0, 00:12:26.028 "completed_nvme_io": 0, 00:12:26.028 "transports": [ 00:12:26.028 { 00:12:26.028 "trtype": "TCP" 00:12:26.028 } 00:12:26.028 ] 00:12:26.028 } 00:12:26.028 ] 00:12:26.028 }' 00:12:26.028 00:46:18 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:26.028 00:46:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:26.028 00:46:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:26.028 00:46:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.028 00:46:18 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:26.028 00:46:18 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:26.028 00:46:18 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:26.028 00:46:18 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:26.028 00:46:18 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:26.028 00:46:18 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:26.028 00:46:18 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:26.028 00:46:18 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:26.028 00:46:18 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:26.028 00:46:18 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:26.028 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.028 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 Malloc1 00:12:26.028 00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.028 00:46:18 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:26.028 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.028 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 
00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.028 00:46:18 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:26.028 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.028 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.028 00:46:18 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:26.028 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.028 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.028 00:46:18 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.028 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.028 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.028 [2024-04-27 00:46:18.650732] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.028 00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.029 00:46:18 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea -a 10.0.0.2 -s 4420 00:12:26.029 00:46:18 -- common/autotest_common.sh@638 -- # local es=0 00:12:26.029 00:46:18 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea -a 10.0.0.2 -s 4420 00:12:26.029 00:46:18 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:26.029 00:46:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:26.029 00:46:18 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:26.029 00:46:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:26.029 00:46:18 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:26.029 00:46:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:26.029 00:46:18 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:26.029 00:46:18 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:26.029 00:46:18 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea -a 10.0.0.2 -s 4420 00:12:26.029 [2024-04-27 00:46:18.679716] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea' 00:12:26.029 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:26.029 could not add new controller: failed to write to nvme-fabrics device 00:12:26.029 00:46:18 -- common/autotest_common.sh@641 -- # es=1 00:12:26.029 00:46:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:26.029 00:46:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:26.029 00:46:18 -- common/autotest_common.sh@665 -- # 
(( !es == 0 )) 00:12:26.029 00:46:18 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:12:26.029 00:46:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:26.029 00:46:18 -- common/autotest_common.sh@10 -- # set +x 00:12:26.029 00:46:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:26.029 00:46:18 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:27.937 00:46:20 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:27.937 00:46:20 -- common/autotest_common.sh@1184 -- # local i=0 00:12:27.937 00:46:20 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:27.937 00:46:20 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:27.937 00:46:20 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:29.849 00:46:22 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:29.849 00:46:22 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:29.849 00:46:22 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.849 00:46:22 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:29.849 00:46:22 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.849 00:46:22 -- common/autotest_common.sh@1194 -- # return 0 00:12:29.849 00:46:22 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.849 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.849 00:46:22 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.849 00:46:22 -- common/autotest_common.sh@1205 -- # local i=0 00:12:29.849 00:46:22 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:29.849 00:46:22 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.849 00:46:22 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:29.849 00:46:22 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.849 00:46:22 -- common/autotest_common.sh@1217 -- # return 0 00:12:29.849 00:46:22 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:12:29.849 00:46:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:29.849 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:12:29.849 00:46:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:29.849 00:46:22 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.849 00:46:22 -- common/autotest_common.sh@638 -- # local es=0 00:12:29.849 00:46:22 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.849 00:46:22 -- common/autotest_common.sh@626 -- # local arg=nvme 00:12:29.849 00:46:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:29.849 00:46:22 -- common/autotest_common.sh@630 -- # type -t nvme 00:12:29.849 00:46:22 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:29.849 00:46:22 -- common/autotest_common.sh@632 -- # type -P nvme 00:12:29.849 00:46:22 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:12:29.849 00:46:22 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:12:29.849 00:46:22 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:12:29.849 00:46:22 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:29.849 [2024-04-27 00:46:22.402531] ctrlr.c: 766:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea' 00:12:29.849 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:29.849 could not add new controller: failed to write to nvme-fabrics device 00:12:29.849 00:46:22 -- common/autotest_common.sh@641 -- # es=1 00:12:29.849 00:46:22 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:12:29.849 00:46:22 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:12:29.849 00:46:22 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:12:29.849 00:46:22 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:29.849 00:46:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:29.849 00:46:22 -- common/autotest_common.sh@10 -- # set +x 00:12:29.849 00:46:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:29.849 00:46:22 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.229 00:46:23 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.229 00:46:23 -- common/autotest_common.sh@1184 -- # local i=0 00:12:31.229 00:46:23 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.229 00:46:23 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:31.229 00:46:23 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:33.139 00:46:25 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:33.139 00:46:25 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:33.139 00:46:25 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.399 00:46:25 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:33.399 00:46:25 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.399 00:46:25 -- common/autotest_common.sh@1194 -- # return 0 00:12:33.399 00:46:25 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.399 00:46:26 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.399 00:46:26 -- common/autotest_common.sh@1205 -- # local i=0 00:12:33.399 00:46:26 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:33.399 00:46:26 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.399 00:46:26 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.399 00:46:26 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:33.399 00:46:26 -- common/autotest_common.sh@1217 -- # return 0 00:12:33.399 00:46:26 -- 
target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.399 00:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.399 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:33.399 00:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.399 00:46:26 -- target/rpc.sh@81 -- # seq 1 5 00:12:33.399 00:46:26 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.399 00:46:26 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.399 00:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.399 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:33.399 00:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.399 00:46:26 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:33.399 00:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.399 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:33.399 [2024-04-27 00:46:26.087675] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.399 00:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.399 00:46:26 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.399 00:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.399 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:33.657 00:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.657 00:46:26 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.657 00:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:33.657 00:46:26 -- common/autotest_common.sh@10 -- # set +x 00:12:33.657 00:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:33.657 00:46:26 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.035 00:46:27 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:35.035 00:46:27 -- common/autotest_common.sh@1184 -- # local i=0 00:12:35.035 00:46:27 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:35.035 00:46:27 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:35.035 00:46:27 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:36.940 00:46:29 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:36.940 00:46:29 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:36.940 00:46:29 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.940 00:46:29 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:36.940 00:46:29 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.940 00:46:29 -- common/autotest_common.sh@1194 -- # return 0 00:12:36.940 00:46:29 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.198 00:46:29 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.198 00:46:29 -- common/autotest_common.sh@1205 -- # local i=0 00:12:37.198 00:46:29 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:37.198 00:46:29 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 
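The lsblk polling traced around here is how the harness decides that a fabrics connect or disconnect has actually taken effect: waitforserial retries until a block device advertising the subsystem serial (SPDKISFASTANDAWESOME) shows up, and waitforserial_disconnect waits for it to drop out again. Roughly, inferred from the xtrace output (a sketch, not the exact autotest_common.sh source):

  waitforserial() {
      local serial=$1 nvme_devices=0 i=0
      while (( i++ <= 15 )); do
          sleep 2
          # count block devices whose SERIAL column matches
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          (( nvme_devices >= 1 )) && return 0
      done
      return 1
  }

  waitforserial_disconnect() {
      local serial=$1
      while lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
          sleep 2                         # device still attached; keep waiting
      done
      return 0
  }

The two-second sleep before each recheck matters on TCP transports, where the kernel initiator can take a moment to surface or tear down the namespace's block device after the nvme connect/disconnect command returns.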
00:12:37.198 00:46:29 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:37.198 00:46:29 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.198 00:46:29 -- common/autotest_common.sh@1217 -- # return 0 00:12:37.198 00:46:29 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:37.198 00:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.198 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.198 00:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.198 00:46:29 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:37.198 00:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.198 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.198 00:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.198 00:46:29 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:37.198 00:46:29 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:37.198 00:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.198 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.198 00:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.198 00:46:29 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:37.198 00:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.198 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.198 [2024-04-27 00:46:29.850177] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:37.198 00:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.198 00:46:29 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:37.198 00:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.198 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.198 00:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.198 00:46:29 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:37.198 00:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:37.198 00:46:29 -- common/autotest_common.sh@10 -- # set +x 00:12:37.198 00:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:37.198 00:46:29 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:39.104 00:46:31 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:39.104 00:46:31 -- common/autotest_common.sh@1184 -- # local i=0 00:12:39.104 00:46:31 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:39.104 00:46:31 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:39.104 00:46:31 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:41.042 00:46:33 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:41.042 00:46:33 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:41.042 00:46:33 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.042 00:46:33 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:41.042 00:46:33 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.042 00:46:33 -- 
common/autotest_common.sh@1194 -- # return 0 00:12:41.042 00:46:33 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.042 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.042 00:46:33 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.042 00:46:33 -- common/autotest_common.sh@1205 -- # local i=0 00:12:41.042 00:46:33 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:41.042 00:46:33 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.042 00:46:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:41.042 00:46:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.042 00:46:33 -- common/autotest_common.sh@1217 -- # return 0 00:12:41.042 00:46:33 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.042 00:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.042 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.042 00:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.042 00:46:33 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.042 00:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.042 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.042 00:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.042 00:46:33 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.042 00:46:33 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.042 00:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.042 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.042 00:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.042 00:46:33 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.042 00:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.042 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.042 [2024-04-27 00:46:33.548161] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.042 00:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.042 00:46:33 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.042 00:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.042 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.042 00:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.042 00:46:33 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.042 00:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:41.042 00:46:33 -- common/autotest_common.sh@10 -- # set +x 00:12:41.042 00:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:41.042 00:46:33 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:42.421 00:46:34 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:42.421 00:46:34 -- common/autotest_common.sh@1184 -- # local i=0 00:12:42.421 00:46:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:42.421 00:46:34 -- common/autotest_common.sh@1186 -- 
# [[ -n '' ]] 00:12:42.421 00:46:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:44.325 00:46:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:44.325 00:46:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.325 00:46:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:44.325 00:46:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:44.325 00:46:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.325 00:46:36 -- common/autotest_common.sh@1194 -- # return 0 00:12:44.325 00:46:36 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.584 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.584 00:46:37 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.584 00:46:37 -- common/autotest_common.sh@1205 -- # local i=0 00:12:44.584 00:46:37 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:44.584 00:46:37 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.584 00:46:37 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:44.584 00:46:37 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.584 00:46:37 -- common/autotest_common.sh@1217 -- # return 0 00:12:44.584 00:46:37 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.584 00:46:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.584 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:44.584 00:46:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.584 00:46:37 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.584 00:46:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.584 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:44.584 00:46:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.584 00:46:37 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.584 00:46:37 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.584 00:46:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.584 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:44.584 00:46:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.584 00:46:37 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.584 00:46:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.584 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:44.584 [2024-04-27 00:46:37.185631] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:44.584 00:46:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.584 00:46:37 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.584 00:46:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.584 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:44.584 00:46:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.584 00:46:37 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.584 00:46:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:44.584 00:46:37 -- common/autotest_common.sh@10 -- # set +x 00:12:44.584 00:46:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:44.584 
00:46:37 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:46.489 00:46:38 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:46.489 00:46:38 -- common/autotest_common.sh@1184 -- # local i=0 00:12:46.489 00:46:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:46.489 00:46:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:46.489 00:46:38 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:48.395 00:46:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:48.395 00:46:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:48.395 00:46:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:48.395 00:46:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:48.395 00:46:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:48.395 00:46:40 -- common/autotest_common.sh@1194 -- # return 0 00:12:48.395 00:46:40 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:48.395 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:48.395 00:46:40 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:48.395 00:46:40 -- common/autotest_common.sh@1205 -- # local i=0 00:12:48.395 00:46:40 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:48.395 00:46:40 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.395 00:46:40 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:48.395 00:46:40 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:48.395 00:46:40 -- common/autotest_common.sh@1217 -- # return 0 00:12:48.395 00:46:40 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:48.395 00:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.395 00:46:40 -- common/autotest_common.sh@10 -- # set +x 00:12:48.395 00:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.395 00:46:40 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:48.395 00:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.395 00:46:40 -- common/autotest_common.sh@10 -- # set +x 00:12:48.395 00:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.395 00:46:40 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:48.395 00:46:40 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:48.395 00:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.395 00:46:40 -- common/autotest_common.sh@10 -- # set +x 00:12:48.395 00:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.395 00:46:40 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:48.395 00:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.395 00:46:40 -- common/autotest_common.sh@10 -- # set +x 00:12:48.395 [2024-04-27 00:46:40.888259] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:48.395 00:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.395 00:46:40 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:48.395 
00:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.395 00:46:40 -- common/autotest_common.sh@10 -- # set +x 00:12:48.395 00:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.395 00:46:40 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:48.395 00:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:48.395 00:46:40 -- common/autotest_common.sh@10 -- # set +x 00:12:48.395 00:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:48.395 00:46:40 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:49.773 00:46:42 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:49.773 00:46:42 -- common/autotest_common.sh@1184 -- # local i=0 00:12:49.773 00:46:42 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:12:49.773 00:46:42 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:12:49.773 00:46:42 -- common/autotest_common.sh@1191 -- # sleep 2 00:12:51.681 00:46:44 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:12:51.681 00:46:44 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:12:51.681 00:46:44 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:12:51.681 00:46:44 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:12:51.681 00:46:44 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:12:51.681 00:46:44 -- common/autotest_common.sh@1194 -- # return 0 00:12:51.681 00:46:44 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:51.942 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:51.942 00:46:44 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:51.942 00:46:44 -- common/autotest_common.sh@1205 -- # local i=0 00:12:51.942 00:46:44 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:12:51.942 00:46:44 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.942 00:46:44 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:12:51.942 00:46:44 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:51.942 00:46:44 -- common/autotest_common.sh@1217 -- # return 0 00:12:51.942 00:46:44 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@99 -- # seq 1 5 00:12:51.942 00:46:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.942 00:46:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 
-- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 [2024-04-27 00:46:44.570669] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:51.942 00:46:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 [2024-04-27 00:46:44.618626] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:51.942 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:51.942 00:46:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:51.942 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:51.942 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.202 00:46:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 [2024-04-27 00:46:44.666693] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.202 00:46:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 [2024-04-27 00:46:44.714727] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 
00:46:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.202 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.202 00:46:44 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.202 00:46:44 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.202 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.202 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.203 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.203 00:46:44 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.203 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.203 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.203 [2024-04-27 00:46:44.762791] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.203 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.203 00:46:44 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.203 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.203 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.203 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.203 00:46:44 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.203 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.203 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.203 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.203 00:46:44 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.203 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.203 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.203 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.203 00:46:44 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.203 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.203 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.203 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.203 00:46:44 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 
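After the create/delete churn, rpc.sh pulls nvmf_get_stats one final time (the JSON follows below) and reduces the per-poll-group counters with two small helpers, jcount and jsum, whose traces appear throughout this run: a jq filter piped into wc -l or awk. A sketch of the pattern, assuming the captured JSON is held in a $stats variable as in the trace:

  # count how many values a jq filter yields (e.g. 4 poll-group names)
  jcount() {
      local filter=$1
      jq "$filter" <<< "$stats" | wc -l
  }
  # sum the values a jq filter yields across all poll groups
  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s+=$1} END {print s}'
  }
  # against the stats below:
  #   jsum '.poll_groups[].admin_qpairs'  -> 7   (0 + 1 + 6 + 0)
  #   jsum '.poll_groups[].io_qpairs'     -> 889 (224 + 223 + 218 + 224)

Summing across poll groups rather than inspecting any single group is deliberate: the target spreads qpairs over its four reactors, so only the aggregate is stable enough to assert on.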
00:12:52.203 00:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:12:52.203 00:46:44 -- common/autotest_common.sh@10 -- # set +x 00:12:52.203 00:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:12:52.203 00:46:44 -- target/rpc.sh@110 -- # stats='{ 00:12:52.203 "tick_rate": 1900000000, 00:12:52.203 "poll_groups": [ 00:12:52.203 { 00:12:52.203 "name": "nvmf_tgt_poll_group_0", 00:12:52.203 "admin_qpairs": 0, 00:12:52.203 "io_qpairs": 224, 00:12:52.203 "current_admin_qpairs": 0, 00:12:52.203 "current_io_qpairs": 0, 00:12:52.203 "pending_bdev_io": 0, 00:12:52.203 "completed_nvme_io": 228, 00:12:52.203 "transports": [ 00:12:52.203 { 00:12:52.203 "trtype": "TCP" 00:12:52.203 } 00:12:52.203 ] 00:12:52.203 }, 00:12:52.203 { 00:12:52.203 "name": "nvmf_tgt_poll_group_1", 00:12:52.203 "admin_qpairs": 1, 00:12:52.203 "io_qpairs": 223, 00:12:52.203 "current_admin_qpairs": 0, 00:12:52.203 "current_io_qpairs": 0, 00:12:52.203 "pending_bdev_io": 0, 00:12:52.203 "completed_nvme_io": 223, 00:12:52.203 "transports": [ 00:12:52.203 { 00:12:52.203 "trtype": "TCP" 00:12:52.203 } 00:12:52.203 ] 00:12:52.203 }, 00:12:52.203 { 00:12:52.203 "name": "nvmf_tgt_poll_group_2", 00:12:52.203 "admin_qpairs": 6, 00:12:52.203 "io_qpairs": 218, 00:12:52.203 "current_admin_qpairs": 0, 00:12:52.203 "current_io_qpairs": 0, 00:12:52.203 "pending_bdev_io": 0, 00:12:52.203 "completed_nvme_io": 269, 00:12:52.203 "transports": [ 00:12:52.203 { 00:12:52.203 "trtype": "TCP" 00:12:52.203 } 00:12:52.203 ] 00:12:52.203 }, 00:12:52.203 { 00:12:52.203 "name": "nvmf_tgt_poll_group_3", 00:12:52.203 "admin_qpairs": 0, 00:12:52.203 "io_qpairs": 224, 00:12:52.203 "current_admin_qpairs": 0, 00:12:52.203 "current_io_qpairs": 0, 00:12:52.203 "pending_bdev_io": 0, 00:12:52.203 "completed_nvme_io": 519, 00:12:52.203 "transports": [ 00:12:52.203 { 00:12:52.203 "trtype": "TCP" 00:12:52.203 } 00:12:52.203 ] 00:12:52.203 } 00:12:52.203 ] 00:12:52.203 }' 00:12:52.203 00:46:44 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.203 00:46:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.203 00:46:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.203 00:46:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.203 00:46:44 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:52.203 00:46:44 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.203 00:46:44 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.203 00:46:44 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.203 00:46:44 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.203 00:46:44 -- target/rpc.sh@113 -- # (( 889 > 0 )) 00:12:52.203 00:46:44 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:52.203 00:46:44 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:52.203 00:46:44 -- target/rpc.sh@123 -- # nvmftestfini 00:12:52.203 00:46:44 -- nvmf/common.sh@477 -- # nvmfcleanup 00:12:52.203 00:46:44 -- nvmf/common.sh@117 -- # sync 00:12:52.462 00:46:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.462 00:46:44 -- nvmf/common.sh@120 -- # set +e 00:12:52.462 00:46:44 -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.462 00:46:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.462 rmmod nvme_tcp 00:12:52.462 rmmod nvme_fabrics 00:12:52.462 rmmod nvme_keyring 00:12:52.462 00:46:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.462 00:46:44 -- nvmf/common.sh@124 -- # set -e 00:12:52.462 00:46:44 -- 
nvmf/common.sh@125 -- # return 0 00:12:52.462 00:46:44 -- nvmf/common.sh@478 -- # '[' -n 2659984 ']' 00:12:52.462 00:46:44 -- nvmf/common.sh@479 -- # killprocess 2659984 00:12:52.462 00:46:44 -- common/autotest_common.sh@936 -- # '[' -z 2659984 ']' 00:12:52.462 00:46:44 -- common/autotest_common.sh@940 -- # kill -0 2659984 00:12:52.462 00:46:44 -- common/autotest_common.sh@941 -- # uname 00:12:52.462 00:46:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:12:52.462 00:46:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2659984 00:12:52.462 00:46:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:12:52.462 00:46:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:12:52.462 00:46:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2659984' 00:12:52.462 killing process with pid 2659984 00:12:52.462 00:46:45 -- common/autotest_common.sh@955 -- # kill 2659984 00:12:52.462 00:46:45 -- common/autotest_common.sh@960 -- # wait 2659984 00:12:53.029 00:46:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:12:53.029 00:46:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:12:53.029 00:46:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:12:53.029 00:46:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.029 00:46:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.029 00:46:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.029 00:46:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:53.029 00:46:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:54.938 00:46:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:54.938 00:12:54.938 real 0m36.085s 00:12:54.938 user 1m51.437s 00:12:54.938 sys 0m5.942s 00:12:54.938 00:46:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:12:54.938 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:12:54.938 ************************************ 00:12:54.938 END TEST nvmf_rpc 00:12:54.938 ************************************ 00:12:55.197 00:46:47 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:55.198 00:46:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:12:55.198 00:46:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:12:55.198 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:12:55.198 ************************************ 00:12:55.198 START TEST nvmf_invalid 00:12:55.198 ************************************ 00:12:55.198 00:46:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:55.198 * Looking for test storage... 
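The teardown above follows the standard autotest shape: nvmftestfini unloads nvme-tcp, nvme-fabrics, and nvme-keyring via modprobe -v -r, killprocess stops the target by pid, remove_spdk_ns deletes the test namespace, and the leftover address on cvl_0_1 is flushed before the next test (nvmf_invalid) starts. The killprocess helper, roughly as traced (a sketch; the real helper in autotest_common.sh also special-cases sudo-owned processes):

  killprocess() {
      local pid=$1
      [ -z "$pid" ] && return 1
      kill -0 "$pid" || return 0                   # nothing left to kill
      local process_name
      process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_0
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                  # reap and propagate exit status
  }

The "kill -0" probe keeps the helper idempotent, and waiting on the pid is what lets the 36-second timing summary above attribute the target's full lifetime to this test rather than leaking a background reactor into nvmf_invalid.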
00:12:55.198 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:12:55.198 00:46:47 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.198 00:46:47 -- nvmf/common.sh@7 -- # uname -s 00:12:55.198 00:46:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.198 00:46:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.198 00:46:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.198 00:46:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.198 00:46:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.198 00:46:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.198 00:46:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.198 00:46:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.198 00:46:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.198 00:46:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.198 00:46:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:12:55.198 00:46:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:12:55.198 00:46:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.198 00:46:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.198 00:46:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:12:55.198 00:46:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.198 00:46:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:12:55.198 00:46:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.198 00:46:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.198 00:46:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.198 00:46:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.198 00:46:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.198 00:46:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.198 00:46:47 -- paths/export.sh@5 -- # export PATH 00:12:55.198 00:46:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.198 00:46:47 -- nvmf/common.sh@47 -- # : 0 00:12:55.198 00:46:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.198 00:46:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.198 00:46:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.198 00:46:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.198 00:46:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.198 00:46:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.198 00:46:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.198 00:46:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.198 00:46:47 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:55.198 00:46:47 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:12:55.198 00:46:47 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:55.198 00:46:47 -- target/invalid.sh@14 -- # target=foobar 00:12:55.198 00:46:47 -- target/invalid.sh@16 -- # RANDOM=0 00:12:55.198 00:46:47 -- target/invalid.sh@34 -- # nvmftestinit 00:12:55.198 00:46:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:12:55.198 00:46:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.198 00:46:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:12:55.198 00:46:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:12:55.198 00:46:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:12:55.198 00:46:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.198 00:46:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.198 00:46:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.198 00:46:47 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:12:55.198 00:46:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:12:55.198 00:46:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.198 00:46:47 -- common/autotest_common.sh@10 -- # set +x 00:13:01.776 00:46:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:01.776 00:46:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.776 00:46:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.776 00:46:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.776 00:46:53 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.776 00:46:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.776 00:46:53 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.776 00:46:53 -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.776 00:46:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.776 00:46:53 -- nvmf/common.sh@296 -- # e810=() 00:13:01.776 00:46:53 -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.776 00:46:53 -- nvmf/common.sh@297 -- # x722=() 00:13:01.776 00:46:53 -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.776 00:46:53 -- nvmf/common.sh@298 -- # mlx=() 00:13:01.776 00:46:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.776 00:46:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.776 00:46:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.776 00:46:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.776 00:46:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.776 00:46:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:01.776 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:01.776 00:46:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.776 00:46:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:01.776 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:01.776 00:46:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.776 00:46:53 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.776 00:46:53 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.776 00:46:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:01.776 00:46:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.776 00:46:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:01.776 Found net devices under 0000:27:00.0: cvl_0_0 00:13:01.776 00:46:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.776 00:46:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.776 00:46:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.776 00:46:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:01.776 00:46:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.776 00:46:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:01.776 Found net devices under 0000:27:00.1: cvl_0_1 00:13:01.776 00:46:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.776 00:46:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:01.776 00:46:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:01.776 00:46:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:01.776 00:46:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.776 00:46:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.776 00:46:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.776 00:46:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.776 00:46:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.776 00:46:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.776 00:46:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.776 00:46:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.776 00:46:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.776 00:46:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.776 00:46:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.776 00:46:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.776 00:46:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.776 00:46:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.776 00:46:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.776 00:46:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.776 00:46:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.776 00:46:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.776 00:46:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.776 00:46:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:13:01.776 00:13:01.776 --- 10.0.0.2 ping statistics --- 00:13:01.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.776 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:13:01.776 00:46:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:13:01.776 00:13:01.776 --- 10.0.0.1 ping statistics --- 00:13:01.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.776 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:13:01.776 00:46:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.776 00:46:53 -- nvmf/common.sh@411 -- # return 0 00:13:01.776 00:46:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:01.776 00:46:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.776 00:46:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:01.776 00:46:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.776 00:46:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:01.776 00:46:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:01.776 00:46:53 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:01.776 00:46:53 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:01.776 00:46:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:01.776 00:46:53 -- common/autotest_common.sh@10 -- # set +x 00:13:01.776 00:46:53 -- nvmf/common.sh@470 -- # nvmfpid=2669422 00:13:01.776 00:46:53 -- nvmf/common.sh@471 -- # waitforlisten 2669422 00:13:01.777 00:46:53 -- common/autotest_common.sh@817 -- # '[' -z 2669422 ']' 00:13:01.777 00:46:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.777 00:46:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:01.777 00:46:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.777 00:46:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:01.777 00:46:53 -- common/autotest_common.sh@10 -- # set +x 00:13:01.777 00:46:53 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.777 [2024-04-27 00:46:53.646329] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:13:01.777 [2024-04-27 00:46:53.646446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.777 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.777 [2024-04-27 00:46:53.784019] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.777 [2024-04-27 00:46:53.889167] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.777 [2024-04-27 00:46:53.889205] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.777 [2024-04-27 00:46:53.889217] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.777 [2024-04-27 00:46:53.889236] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.777 [2024-04-27 00:46:53.889244] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
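The nvmf_tcp_init sequence traced above fakes a two-host NVMe/TCP topology on one machine: the target-side port (cvl_0_0) moves into a private network namespace, both ends get 10.0.0.x addresses, a firewall rule admits port 4420, and a ping in each direction proves the path. Boiled down to its essential commands (interface names and addresses as in this log; run as root):

    ip netns add cvl_0_0_ns_spdk                                        # target side lives here
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP
    ping -c 1 10.0.0.2                                                  # initiator-to-target sanity check

nvmfappstart then launches the target inside that namespace and blocks until its JSON-RPC socket answers. A rough stand-in for that helper, with waitforlisten reduced to a simple poll (rpc_get_methods is just a cheap liveness probe here; the real helper does more):

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.5
    done
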
00:13:01.777 [2024-04-27 00:46:53.889370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.777 [2024-04-27 00:46:53.889387] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.777 [2024-04-27 00:46:53.889497] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.777 [2024-04-27 00:46:53.889508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.777 00:46:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:01.777 00:46:54 -- common/autotest_common.sh@850 -- # return 0 00:13:01.777 00:46:54 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:01.777 00:46:54 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:01.777 00:46:54 -- common/autotest_common.sh@10 -- # set +x 00:13:01.777 00:46:54 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:01.777 00:46:54 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:01.777 00:46:54 -- target/invalid.sh@40 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode21402 00:13:02.036 [2024-04-27 00:46:54.537956] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:02.036 00:46:54 -- target/invalid.sh@40 -- # out='request: 00:13:02.036 { 00:13:02.036 "nqn": "nqn.2016-06.io.spdk:cnode21402", 00:13:02.036 "tgt_name": "foobar", 00:13:02.036 "method": "nvmf_create_subsystem", 00:13:02.036 "req_id": 1 00:13:02.036 } 00:13:02.036 Got JSON-RPC error response 00:13:02.036 response: 00:13:02.036 { 00:13:02.036 "code": -32603, 00:13:02.036 "message": "Unable to find target foobar" 00:13:02.036 }' 00:13:02.036 00:46:54 -- target/invalid.sh@41 -- # [[ request: 00:13:02.036 { 00:13:02.036 "nqn": "nqn.2016-06.io.spdk:cnode21402", 00:13:02.036 "tgt_name": "foobar", 00:13:02.036 "method": "nvmf_create_subsystem", 00:13:02.036 "req_id": 1 00:13:02.036 } 00:13:02.036 Got JSON-RPC error response 00:13:02.036 response: 00:13:02.036 { 00:13:02.036 "code": -32603, 00:13:02.036 "message": "Unable to find target foobar" 00:13:02.036 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:02.036 00:46:54 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:02.036 00:46:54 -- target/invalid.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode9533 00:13:02.036 [2024-04-27 00:46:54.698213] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode9533: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:02.036 00:46:54 -- target/invalid.sh@45 -- # out='request: 00:13:02.036 { 00:13:02.036 "nqn": "nqn.2016-06.io.spdk:cnode9533", 00:13:02.036 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:02.036 "method": "nvmf_create_subsystem", 00:13:02.036 "req_id": 1 00:13:02.036 } 00:13:02.036 Got JSON-RPC error response 00:13:02.036 response: 00:13:02.036 { 00:13:02.036 "code": -32602, 00:13:02.036 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:02.036 }' 00:13:02.036 00:46:54 -- target/invalid.sh@46 -- # [[ request: 00:13:02.036 { 00:13:02.036 "nqn": "nqn.2016-06.io.spdk:cnode9533", 00:13:02.036 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:02.036 "method": "nvmf_create_subsystem", 00:13:02.036 "req_id": 1 00:13:02.036 } 00:13:02.036 Got JSON-RPC error response 00:13:02.036 response: 00:13:02.036 { 00:13:02.036 "code": 
-32602, 00:13:02.036 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:02.036 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:02.036 00:46:54 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:02.037 00:46:54 -- target/invalid.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31200 00:13:02.295 [2024-04-27 00:46:54.842350] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31200: invalid model number 'SPDK_Controller' 00:13:02.295 00:46:54 -- target/invalid.sh@50 -- # out='request: 00:13:02.295 { 00:13:02.295 "nqn": "nqn.2016-06.io.spdk:cnode31200", 00:13:02.295 "model_number": "SPDK_Controller\u001f", 00:13:02.295 "method": "nvmf_create_subsystem", 00:13:02.295 "req_id": 1 00:13:02.295 } 00:13:02.295 Got JSON-RPC error response 00:13:02.295 response: 00:13:02.295 { 00:13:02.295 "code": -32602, 00:13:02.295 "message": "Invalid MN SPDK_Controller\u001f" 00:13:02.295 }' 00:13:02.295 00:46:54 -- target/invalid.sh@51 -- # [[ request: 00:13:02.295 { 00:13:02.295 "nqn": "nqn.2016-06.io.spdk:cnode31200", 00:13:02.295 "model_number": "SPDK_Controller\u001f", 00:13:02.295 "method": "nvmf_create_subsystem", 00:13:02.295 "req_id": 1 00:13:02.295 } 00:13:02.295 Got JSON-RPC error response 00:13:02.295 response: 00:13:02.295 { 00:13:02.295 "code": -32602, 00:13:02.295 "message": "Invalid MN SPDK_Controller\u001f" 00:13:02.295 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:02.295 00:46:54 -- target/invalid.sh@54 -- # gen_random_s 21 00:13:02.295 00:46:54 -- target/invalid.sh@19 -- # local length=21 ll 00:13:02.296 00:46:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:02.296 00:46:54 -- target/invalid.sh@21 -- # local chars 00:13:02.296 00:46:54 -- target/invalid.sh@22 -- # local string 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 69 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=E 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 94 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+='^' 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 45 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=- 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 108 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x6c' 
00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=l 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 52 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x34' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=4 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 111 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=o 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 83 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=S 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 119 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=w 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 38 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+='&' 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 112 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=p 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 121 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=y 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 68 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=D 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 97 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=a 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 73 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=I 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 35 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x23' 
00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+='#' 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 103 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=g 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 72 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=H 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 127 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=$'\177' 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 34 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+='"' 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 107 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+=k 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # printf %x 94 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:02.296 00:46:54 -- target/invalid.sh@25 -- # string+='^' 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.296 00:46:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.296 00:46:54 -- target/invalid.sh@28 -- # [[ E == \- ]] 00:13:02.296 00:46:54 -- target/invalid.sh@31 -- # echo 'E^-l4oSw&pyDaI#gH"k^' 00:13:02.296 00:46:54 -- target/invalid.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'E^-l4oSw&pyDaI#gH"k^' nqn.2016-06.io.spdk:cnode7932 00:13:02.556 [2024-04-27 00:46:55.086659] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7932: invalid serial number 'E^-l4oSw&pyDaI#gH"k^' 00:13:02.556 00:46:55 -- target/invalid.sh@54 -- # out='request: 00:13:02.556 { 00:13:02.556 "nqn": "nqn.2016-06.io.spdk:cnode7932", 00:13:02.556 "serial_number": "E^-l4oSw&pyDaI#gH\u007f\"k^", 00:13:02.556 "method": "nvmf_create_subsystem", 00:13:02.556 "req_id": 1 00:13:02.556 } 00:13:02.556 Got JSON-RPC error response 00:13:02.556 response: 00:13:02.556 { 00:13:02.556 "code": -32602, 00:13:02.556 "message": "Invalid SN E^-l4oSw&pyDaI#gH\u007f\"k^" 00:13:02.556 }' 00:13:02.556 00:46:55 -- target/invalid.sh@55 -- # [[ request: 00:13:02.556 { 00:13:02.556 "nqn": "nqn.2016-06.io.spdk:cnode7932", 00:13:02.556 "serial_number": "E^-l4oSw&pyDaI#gH\u007f\"k^", 00:13:02.556 "method": "nvmf_create_subsystem", 00:13:02.556 "req_id": 1 00:13:02.556 } 00:13:02.556 Got JSON-RPC error response 00:13:02.556 response: 00:13:02.556 { 00:13:02.556 "code": -32602, 00:13:02.556 "message": "Invalid SN 
E^-l4oSw&pyDaI#gH\u007f\"k^" 00:13:02.556 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:02.556 00:46:55 -- target/invalid.sh@58 -- # gen_random_s 41 00:13:02.556 00:46:55 -- target/invalid.sh@19 -- # local length=41 ll 00:13:02.556 00:46:55 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:02.556 00:46:55 -- target/invalid.sh@21 -- # local chars 00:13:02.556 00:46:55 -- target/invalid.sh@22 -- # local string 00:13:02.556 00:46:55 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:02.556 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # printf %x 88 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # string+=X 00:13:02.556 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.556 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # printf %x 70 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # string+=F 00:13:02.556 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.556 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # printf %x 103 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # string+=g 00:13:02.556 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.556 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.556 00:46:55 -- target/invalid.sh@25 -- # printf %x 43 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x2b' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=+ 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 45 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x2d' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=- 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 40 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+='(' 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 42 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+='*' 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 71 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x47' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=G 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 
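The long printf/echo trace running here is gen_random_s from target/invalid.sh assembling a candidate string one byte at a time from codes 32 through 127, so quotes, spaces, and DEL all get exercised. A condensed sketch of the same generator (simplified; the suite's version also checks for a leading '-', as the [[ E == \- ]] step above shows):

    gen_random_s() {
        local length=$1 ll string= c
        for (( ll = 0; ll < length; ll++ )); do
            c=$(( 32 + RANDOM % 96 ))                  # byte values 32..127, same table as the trace
            string+=$(echo -e "\x$(printf %x "$c")")   # render the code point as a literal byte
        done
        printf '%s\n' "$string"                        # printf, since $string may begin with '-'
    }

With RANDOM=0 seeded up front (target/invalid.sh@16 earlier in this log), the generated sequence is reproducible from run to run.
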
00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 117 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=u 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 32 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=' ' 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 44 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=, 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 53 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=5 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 38 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+='&' 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 48 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=0 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 72 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x48' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=H 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 74 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x4a' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=J 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 118 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=v 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 78 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=N 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 34 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x22' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+='"' 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 79 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x4f' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=O 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 78 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=N 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 104 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x68' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=h 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 113 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=q 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 69 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=E 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 50 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=2 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 97 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+=a 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # printf %x 91 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:02.557 00:46:55 -- target/invalid.sh@25 -- # string+='[' 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.557 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 116 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+=t 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 91 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+='[' 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 113 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+=q 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 40 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+='(' 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 82 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+=R 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 115 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+=s 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 60 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+='<' 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 62 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x3e' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+='>' 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 107 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+=k 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 87 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+=W 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 40 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+='(' 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 66 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+=B 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.818 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # printf %x 87 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x57' 00:13:02.818 00:46:55 -- target/invalid.sh@25 -- # string+=W 00:13:02.819 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 00:13:02.819 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.819 00:46:55 -- target/invalid.sh@25 -- # printf %x 110 00:13:02.819 00:46:55 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:02.819 00:46:55 -- target/invalid.sh@25 -- # string+=n 00:13:02.819 00:46:55 -- target/invalid.sh@24 -- # (( ll++ )) 
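Every negative test in this file follows the same pattern once its input is ready: feed the bad serial or model number to rpc.py, capture the JSON-RPC error it prints, and glob-match the message (the backslash-heavy `*\I\n\v\a\l\i\d\ \M\N*` patterns in the trace are just character-escaped globs). A reduced sketch of that capture-and-match step, reusing the control-character serial from earlier; exact capture details in the suite may differ:

    out=$(./scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' \
            nqn.2016-06.io.spdk:cnode9533 2>&1) || true
    [[ $out == *"Invalid SN"* ]] || { echo "unexpected response: $out" >&2; exit 1; }
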
00:13:02.819 00:46:55 -- target/invalid.sh@24 -- # (( ll < length )) 00:13:02.819 00:46:55 -- target/invalid.sh@28 -- # [[ X == \- ]] 00:13:02.819 00:46:55 -- target/invalid.sh@31 -- # echo 'XFg+-(*Gu ,5&0HJvN"ONhqE2a[t[q(Rs<>kW(BWn' 00:13:02.819 00:46:55 -- target/invalid.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'XFg+-(*Gu ,5&0HJvN"ONhqE2a[t[q(Rs<>kW(BWn' nqn.2016-06.io.spdk:cnode5754 00:13:02.819 [2024-04-27 00:46:55.471149] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5754: invalid model number 'XFg+-(*Gu ,5&0HJvN"ONhqE2a[t[q(Rs<>kW(BWn' 00:13:02.819 00:46:55 -- target/invalid.sh@58 -- # out='request: 00:13:02.819 { 00:13:02.819 "nqn": "nqn.2016-06.io.spdk:cnode5754", 00:13:02.819 "model_number": "XFg+-(*Gu ,5&0HJvN\"ONhqE2a[t[q(Rs<>kW(BWn", 00:13:02.819 "method": "nvmf_create_subsystem", 00:13:02.819 "req_id": 1 00:13:02.819 } 00:13:02.819 Got JSON-RPC error response 00:13:02.819 response: 00:13:02.819 { 00:13:02.819 "code": -32602, 00:13:02.819 "message": "Invalid MN XFg+-(*Gu ,5&0HJvN\"ONhqE2a[t[q(Rs<>kW(BWn" 00:13:02.819 }' 00:13:02.819 00:46:55 -- target/invalid.sh@59 -- # [[ request: 00:13:02.819 { 00:13:02.819 "nqn": "nqn.2016-06.io.spdk:cnode5754", 00:13:02.819 "model_number": "XFg+-(*Gu ,5&0HJvN\"ONhqE2a[t[q(Rs<>kW(BWn", 00:13:02.819 "method": "nvmf_create_subsystem", 00:13:02.819 "req_id": 1 00:13:02.819 } 00:13:02.819 Got JSON-RPC error response 00:13:02.819 response: 00:13:02.819 { 00:13:02.819 "code": -32602, 00:13:02.819 "message": "Invalid MN XFg+-(*Gu ,5&0HJvN\"ONhqE2a[t[q(Rs<>kW(BWn" 00:13:02.819 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:02.819 00:46:55 -- target/invalid.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:03.080 [2024-04-27 00:46:55.627408] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.080 00:46:55 -- target/invalid.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:03.338 00:46:55 -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:03.338 00:46:55 -- target/invalid.sh@67 -- # echo '' 00:13:03.338 00:46:55 -- target/invalid.sh@67 -- # head -n 1 00:13:03.338 00:46:55 -- target/invalid.sh@67 -- # IP= 00:13:03.338 00:46:55 -- target/invalid.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:03.338 [2024-04-27 00:46:55.959875] nvmf_rpc.c: 792:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:03.338 00:46:55 -- target/invalid.sh@69 -- # out='request: 00:13:03.338 { 00:13:03.339 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:03.339 "listen_address": { 00:13:03.339 "trtype": "tcp", 00:13:03.339 "traddr": "", 00:13:03.339 "trsvcid": "4421" 00:13:03.339 }, 00:13:03.339 "method": "nvmf_subsystem_remove_listener", 00:13:03.339 "req_id": 1 00:13:03.339 } 00:13:03.339 Got JSON-RPC error response 00:13:03.339 response: 00:13:03.339 { 00:13:03.339 "code": -32602, 00:13:03.339 "message": "Invalid parameters" 00:13:03.339 }' 00:13:03.339 00:46:55 -- target/invalid.sh@70 -- # [[ request: 00:13:03.339 { 00:13:03.339 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:03.339 "listen_address": { 00:13:03.339 "trtype": "tcp", 00:13:03.339 "traddr": "", 00:13:03.339 "trsvcid": "4421" 00:13:03.339 }, 00:13:03.339 "method": "nvmf_subsystem_remove_listener", 00:13:03.339 "req_id": 1 00:13:03.339 } 
00:13:03.339 Got JSON-RPC error response 00:13:03.339 response: 00:13:03.339 { 00:13:03.339 "code": -32602, 00:13:03.339 "message": "Invalid parameters" 00:13:03.339 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:03.339 00:46:55 -- target/invalid.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13990 -i 0 00:13:03.597 [2024-04-27 00:46:56.120008] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13990: invalid cntlid range [0-65519] 00:13:03.597 00:46:56 -- target/invalid.sh@73 -- # out='request: 00:13:03.597 { 00:13:03.597 "nqn": "nqn.2016-06.io.spdk:cnode13990", 00:13:03.597 "min_cntlid": 0, 00:13:03.597 "method": "nvmf_create_subsystem", 00:13:03.597 "req_id": 1 00:13:03.597 } 00:13:03.597 Got JSON-RPC error response 00:13:03.597 response: 00:13:03.597 { 00:13:03.597 "code": -32602, 00:13:03.597 "message": "Invalid cntlid range [0-65519]" 00:13:03.597 }' 00:13:03.597 00:46:56 -- target/invalid.sh@74 -- # [[ request: 00:13:03.597 { 00:13:03.597 "nqn": "nqn.2016-06.io.spdk:cnode13990", 00:13:03.597 "min_cntlid": 0, 00:13:03.597 "method": "nvmf_create_subsystem", 00:13:03.597 "req_id": 1 00:13:03.597 } 00:13:03.597 Got JSON-RPC error response 00:13:03.597 response: 00:13:03.597 { 00:13:03.597 "code": -32602, 00:13:03.597 "message": "Invalid cntlid range [0-65519]" 00:13:03.597 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.597 00:46:56 -- target/invalid.sh@75 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13784 -i 65520 00:13:03.597 [2024-04-27 00:46:56.276153] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13784: invalid cntlid range [65520-65519] 00:13:03.856 00:46:56 -- target/invalid.sh@75 -- # out='request: 00:13:03.856 { 00:13:03.856 "nqn": "nqn.2016-06.io.spdk:cnode13784", 00:13:03.856 "min_cntlid": 65520, 00:13:03.856 "method": "nvmf_create_subsystem", 00:13:03.856 "req_id": 1 00:13:03.856 } 00:13:03.856 Got JSON-RPC error response 00:13:03.856 response: 00:13:03.856 { 00:13:03.856 "code": -32602, 00:13:03.856 "message": "Invalid cntlid range [65520-65519]" 00:13:03.856 }' 00:13:03.856 00:46:56 -- target/invalid.sh@76 -- # [[ request: 00:13:03.856 { 00:13:03.856 "nqn": "nqn.2016-06.io.spdk:cnode13784", 00:13:03.856 "min_cntlid": 65520, 00:13:03.856 "method": "nvmf_create_subsystem", 00:13:03.856 "req_id": 1 00:13:03.856 } 00:13:03.856 Got JSON-RPC error response 00:13:03.856 response: 00:13:03.856 { 00:13:03.856 "code": -32602, 00:13:03.856 "message": "Invalid cntlid range [65520-65519]" 00:13:03.856 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.856 00:46:56 -- target/invalid.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode12855 -I 0 00:13:03.856 [2024-04-27 00:46:56.416326] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12855: invalid cntlid range [1-0] 00:13:03.856 00:46:56 -- target/invalid.sh@77 -- # out='request: 00:13:03.856 { 00:13:03.856 "nqn": "nqn.2016-06.io.spdk:cnode12855", 00:13:03.856 "max_cntlid": 0, 00:13:03.856 "method": "nvmf_create_subsystem", 00:13:03.856 "req_id": 1 00:13:03.856 } 00:13:03.856 Got JSON-RPC error response 00:13:03.856 response: 00:13:03.856 { 00:13:03.856 "code": -32602, 00:13:03.856 "message": "Invalid cntlid range [1-0]" 00:13:03.856 }' 00:13:03.856 00:46:56 -- target/invalid.sh@78 -- 
# [[ request: 00:13:03.856 { 00:13:03.856 "nqn": "nqn.2016-06.io.spdk:cnode12855", 00:13:03.856 "max_cntlid": 0, 00:13:03.856 "method": "nvmf_create_subsystem", 00:13:03.856 "req_id": 1 00:13:03.856 } 00:13:03.856 Got JSON-RPC error response 00:13:03.856 response: 00:13:03.856 { 00:13:03.856 "code": -32602, 00:13:03.856 "message": "Invalid cntlid range [1-0]" 00:13:03.856 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:03.856 00:46:56 -- target/invalid.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14774 -I 65520 00:13:04.116 [2024-04-27 00:46:56.556504] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14774: invalid cntlid range [1-65520] 00:13:04.116 00:46:56 -- target/invalid.sh@79 -- # out='request: 00:13:04.116 { 00:13:04.116 "nqn": "nqn.2016-06.io.spdk:cnode14774", 00:13:04.116 "max_cntlid": 65520, 00:13:04.116 "method": "nvmf_create_subsystem", 00:13:04.116 "req_id": 1 00:13:04.116 } 00:13:04.116 Got JSON-RPC error response 00:13:04.116 response: 00:13:04.116 { 00:13:04.116 "code": -32602, 00:13:04.116 "message": "Invalid cntlid range [1-65520]" 00:13:04.116 }' 00:13:04.116 00:46:56 -- target/invalid.sh@80 -- # [[ request: 00:13:04.116 { 00:13:04.116 "nqn": "nqn.2016-06.io.spdk:cnode14774", 00:13:04.116 "max_cntlid": 65520, 00:13:04.116 "method": "nvmf_create_subsystem", 00:13:04.116 "req_id": 1 00:13:04.116 } 00:13:04.116 Got JSON-RPC error response 00:13:04.116 response: 00:13:04.116 { 00:13:04.116 "code": -32602, 00:13:04.116 "message": "Invalid cntlid range [1-65520]" 00:13:04.116 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.116 00:46:56 -- target/invalid.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24911 -i 6 -I 5 00:13:04.116 [2024-04-27 00:46:56.696657] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24911: invalid cntlid range [6-5] 00:13:04.116 00:46:56 -- target/invalid.sh@83 -- # out='request: 00:13:04.116 { 00:13:04.116 "nqn": "nqn.2016-06.io.spdk:cnode24911", 00:13:04.116 "min_cntlid": 6, 00:13:04.116 "max_cntlid": 5, 00:13:04.116 "method": "nvmf_create_subsystem", 00:13:04.116 "req_id": 1 00:13:04.116 } 00:13:04.116 Got JSON-RPC error response 00:13:04.116 response: 00:13:04.116 { 00:13:04.116 "code": -32602, 00:13:04.116 "message": "Invalid cntlid range [6-5]" 00:13:04.116 }' 00:13:04.116 00:46:56 -- target/invalid.sh@84 -- # [[ request: 00:13:04.116 { 00:13:04.116 "nqn": "nqn.2016-06.io.spdk:cnode24911", 00:13:04.116 "min_cntlid": 6, 00:13:04.116 "max_cntlid": 5, 00:13:04.116 "method": "nvmf_create_subsystem", 00:13:04.116 "req_id": 1 00:13:04.116 } 00:13:04.116 Got JSON-RPC error response 00:13:04.116 response: 00:13:04.116 { 00:13:04.116 "code": -32602, 00:13:04.116 "message": "Invalid cntlid range [6-5]" 00:13:04.116 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:04.116 00:46:56 -- target/invalid.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:04.116 00:46:56 -- target/invalid.sh@87 -- # out='request: 00:13:04.116 { 00:13:04.116 "name": "foobar", 00:13:04.116 "method": "nvmf_delete_target", 00:13:04.116 "req_id": 1 00:13:04.116 } 00:13:04.116 Got JSON-RPC error response 00:13:04.116 response: 00:13:04.116 { 00:13:04.116 "code": -32602, 00:13:04.116 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:13:04.116 }' 00:13:04.116 00:46:56 -- target/invalid.sh@88 -- # [[ request: 00:13:04.116 { 00:13:04.116 "name": "foobar", 00:13:04.116 "method": "nvmf_delete_target", 00:13:04.116 "req_id": 1 00:13:04.116 } 00:13:04.116 Got JSON-RPC error response 00:13:04.116 response: 00:13:04.116 { 00:13:04.116 "code": -32602, 00:13:04.116 "message": "The specified target doesn't exist, cannot delete it." 00:13:04.116 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:04.116 00:46:56 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:04.116 00:46:56 -- target/invalid.sh@91 -- # nvmftestfini 00:13:04.116 00:46:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:04.116 00:46:56 -- nvmf/common.sh@117 -- # sync 00:13:04.116 00:46:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:04.116 00:46:56 -- nvmf/common.sh@120 -- # set +e 00:13:04.116 00:46:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:04.116 00:46:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:04.116 rmmod nvme_tcp 00:13:04.377 rmmod nvme_fabrics 00:13:04.377 rmmod nvme_keyring 00:13:04.377 00:46:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:04.377 00:46:56 -- nvmf/common.sh@124 -- # set -e 00:13:04.377 00:46:56 -- nvmf/common.sh@125 -- # return 0 00:13:04.377 00:46:56 -- nvmf/common.sh@478 -- # '[' -n 2669422 ']' 00:13:04.377 00:46:56 -- nvmf/common.sh@479 -- # killprocess 2669422 00:13:04.377 00:46:56 -- common/autotest_common.sh@936 -- # '[' -z 2669422 ']' 00:13:04.377 00:46:56 -- common/autotest_common.sh@940 -- # kill -0 2669422 00:13:04.377 00:46:56 -- common/autotest_common.sh@941 -- # uname 00:13:04.377 00:46:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:04.377 00:46:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2669422 00:13:04.377 00:46:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:13:04.377 00:46:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:13:04.377 00:46:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2669422' 00:13:04.377 killing process with pid 2669422 00:13:04.377 00:46:56 -- common/autotest_common.sh@955 -- # kill 2669422 00:13:04.377 00:46:56 -- common/autotest_common.sh@960 -- # wait 2669422 00:13:04.987 00:46:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:04.987 00:46:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:04.987 00:46:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:04.987 00:46:57 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:04.987 00:46:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:04.987 00:46:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.987 00:46:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:04.987 00:46:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:06.894 00:46:59 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:06.894 00:13:06.894 real 0m11.689s 00:13:06.894 user 0m16.906s 00:13:06.894 sys 0m5.235s 00:13:06.894 00:46:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:06.894 00:46:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.894 ************************************ 00:13:06.894 END TEST nvmf_invalid 00:13:06.894 ************************************ 00:13:06.894 00:46:59 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:06.894 
00:46:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:06.894 00:46:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:06.894 00:46:59 -- common/autotest_common.sh@10 -- # set +x 00:13:06.894 ************************************ 00:13:06.894 START TEST nvmf_abort 00:13:06.894 ************************************ 00:13:06.894 00:46:59 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:07.152 * Looking for test storage... 00:13:07.152 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:07.152 00:46:59 -- target/abort.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:07.152 00:46:59 -- nvmf/common.sh@7 -- # uname -s 00:13:07.152 00:46:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:07.152 00:46:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:07.152 00:46:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:07.152 00:46:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:07.152 00:46:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:07.152 00:46:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:07.152 00:46:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:07.153 00:46:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:07.153 00:46:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:07.153 00:46:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:07.153 00:46:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:13:07.153 00:46:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:13:07.153 00:46:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:07.153 00:46:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:07.153 00:46:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:07.153 00:46:59 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:07.153 00:46:59 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:07.153 00:46:59 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:07.153 00:46:59 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:07.153 00:46:59 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:07.153 00:46:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.153 00:46:59 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.153 00:46:59 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.153 00:46:59 -- paths/export.sh@5 -- # export PATH 00:13:07.153 00:46:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:07.153 00:46:59 -- nvmf/common.sh@47 -- # : 0 00:13:07.153 00:46:59 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:07.153 00:46:59 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:07.153 00:46:59 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:07.153 00:46:59 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:07.153 00:46:59 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:07.153 00:46:59 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:07.153 00:46:59 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:07.153 00:46:59 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:07.153 00:46:59 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:07.153 00:46:59 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:13:07.153 00:46:59 -- target/abort.sh@14 -- # nvmftestinit 00:13:07.153 00:46:59 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:07.153 00:46:59 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:07.153 00:46:59 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:07.153 00:46:59 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:07.153 00:46:59 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:07.153 00:46:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:07.153 00:46:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:07.153 00:46:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:07.153 00:46:59 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:13:07.153 00:46:59 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:07.153 00:46:59 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:07.153 00:46:59 -- common/autotest_common.sh@10 -- # set +x 00:13:12.430 00:47:05 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 
pci 00:13:12.430 00:47:05 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:12.430 00:47:05 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:12.430 00:47:05 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:12.430 00:47:05 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:12.430 00:47:05 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:12.430 00:47:05 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:12.430 00:47:05 -- nvmf/common.sh@295 -- # net_devs=() 00:13:12.430 00:47:05 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:12.430 00:47:05 -- nvmf/common.sh@296 -- # e810=() 00:13:12.430 00:47:05 -- nvmf/common.sh@296 -- # local -ga e810 00:13:12.430 00:47:05 -- nvmf/common.sh@297 -- # x722=() 00:13:12.430 00:47:05 -- nvmf/common.sh@297 -- # local -ga x722 00:13:12.430 00:47:05 -- nvmf/common.sh@298 -- # mlx=() 00:13:12.430 00:47:05 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:12.430 00:47:05 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:12.430 00:47:05 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:12.430 00:47:05 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:12.430 00:47:05 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:12.430 00:47:05 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:12.430 00:47:05 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:12.430 00:47:05 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:12.430 00:47:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.430 00:47:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:12.430 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:12.430 00:47:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.430 00:47:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.430 00:47:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.430 00:47:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.430 00:47:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:12.431 00:47:05 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:12.431 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:12.431 00:47:05 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:12.431 
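The scan above walks sysfs for NICs whose vendor/device IDs are on the supported list (here Intel E810, 8086:159b); the lines that follow map each matched function to its kernel net devices (cvl_0_0, cvl_0_1). Condensed into plain bash under those same IDs, the whole walk is roughly:

intel=0x8086 e810_dev=0x159b
for pci in /sys/bus/pci/devices/*; do
  # vendor/device sysfs attributes hold the PCI IDs as 0x-prefixed hex strings
  [[ $(<"$pci/vendor") == "$intel" && $(<"$pci/device") == "$e810_dev" ]] || continue
  # network interfaces registered for this function live under <pci>/net/
  for netdev in "$pci"/net/*; do
    [[ -e $netdev ]] && echo "Found ${pci##*/}: ${netdev##*/}"
  done
done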
00:47:05 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.431 00:47:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.431 00:47:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:12.431 00:47:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.431 00:47:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:12.431 Found net devices under 0000:27:00.0: cvl_0_0 00:13:12.431 00:47:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.431 00:47:05 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:12.431 00:47:05 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:12.431 00:47:05 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:12.431 00:47:05 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:12.431 00:47:05 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:12.431 Found net devices under 0000:27:00.1: cvl_0_1 00:13:12.431 00:47:05 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:12.431 00:47:05 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:12.431 00:47:05 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:12.431 00:47:05 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:12.431 00:47:05 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:12.431 00:47:05 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:12.431 00:47:05 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:12.431 00:47:05 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:12.431 00:47:05 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:12.431 00:47:05 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:12.431 00:47:05 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:12.431 00:47:05 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:12.431 00:47:05 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:12.431 00:47:05 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:12.431 00:47:05 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:12.431 00:47:05 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:12.431 00:47:05 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:12.691 00:47:05 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:12.691 00:47:05 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:12.691 00:47:05 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:12.691 00:47:05 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:12.691 00:47:05 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:12.691 00:47:05 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:12.691 00:47:05 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:12.691 00:47:05 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:12.691 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:12.691 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:13:12.691 00:13:12.691 --- 10.0.0.2 ping statistics --- 00:13:12.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.691 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:13:12.691 00:47:05 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:12.692 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:12.692 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:13:12.692 00:13:12.692 --- 10.0.0.1 ping statistics --- 00:13:12.692 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:12.692 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:13:12.692 00:47:05 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:12.692 00:47:05 -- nvmf/common.sh@411 -- # return 0 00:13:12.692 00:47:05 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:12.692 00:47:05 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:12.692 00:47:05 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:12.692 00:47:05 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:12.692 00:47:05 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:12.692 00:47:05 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:12.692 00:47:05 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:12.952 00:47:05 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:12.952 00:47:05 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:12.952 00:47:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:12.952 00:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:12.952 00:47:05 -- nvmf/common.sh@470 -- # nvmfpid=2674323 00:13:12.952 00:47:05 -- nvmf/common.sh@471 -- # waitforlisten 2674323 00:13:12.952 00:47:05 -- common/autotest_common.sh@817 -- # '[' -z 2674323 ']' 00:13:12.952 00:47:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.952 00:47:05 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:12.952 00:47:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:12.952 00:47:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.952 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.952 00:47:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:12.952 00:47:05 -- common/autotest_common.sh@10 -- # set +x 00:13:12.952 [2024-04-27 00:47:05.487739] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:13:12.952 [2024-04-27 00:47:05.487865] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.952 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.952 [2024-04-27 00:47:05.646572] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.213 [2024-04-27 00:47:05.804423] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.213 [2024-04-27 00:47:05.804494] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:13.213 [2024-04-27 00:47:05.804511] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.213 [2024-04-27 00:47:05.804528] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.213 [2024-04-27 00:47:05.804541] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.213 [2024-04-27 00:47:05.804655] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.213 [2024-04-27 00:47:05.804770] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.213 [2024-04-27 00:47:05.804779] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.780 00:47:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:13.780 00:47:06 -- common/autotest_common.sh@850 -- # return 0 00:13:13.780 00:47:06 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:13.780 00:47:06 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:13.780 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.780 00:47:06 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.780 00:47:06 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:13.780 00:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.780 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.780 [2024-04-27 00:47:06.243880] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:13.780 00:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.780 00:47:06 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:13.780 00:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.780 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.780 Malloc0 00:13:13.780 00:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.780 00:47:06 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:13.780 00:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.780 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.780 Delay0 00:13:13.780 00:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.780 00:47:06 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:13:13.780 00:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.780 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.780 00:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.780 00:47:06 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:13.780 00:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.780 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.780 00:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.780 00:47:06 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:13.780 00:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.780 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.780 [2024-04-27 00:47:06.352234] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:13.780 00:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.780 00:47:06 -- target/abort.sh@27 -- # rpc_cmd 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:13.780 00:47:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:13.780 00:47:06 -- common/autotest_common.sh@10 -- # set +x 00:13:13.780 00:47:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:13.780 00:47:06 -- target/abort.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:13.780 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.038 [2024-04-27 00:47:06.494184] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:16.580 Initializing NVMe Controllers 00:13:16.580 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:16.580 controller IO queue size 128 less than required 00:13:16.580 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:16.580 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:16.580 Initialization complete. Launching workers. 00:13:16.580 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 47266 00:13:16.580 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 47331, failed to submit 62 00:13:16.580 success 47270, unsuccess 61, failed 0 00:13:16.580 00:47:08 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:16.580 00:47:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:13:16.580 00:47:08 -- common/autotest_common.sh@10 -- # set +x 00:13:16.580 00:47:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:13:16.580 00:47:08 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:16.580 00:47:08 -- target/abort.sh@38 -- # nvmftestfini 00:13:16.580 00:47:08 -- nvmf/common.sh@477 -- # nvmfcleanup 00:13:16.580 00:47:08 -- nvmf/common.sh@117 -- # sync 00:13:16.580 00:47:08 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:16.580 00:47:08 -- nvmf/common.sh@120 -- # set +e 00:13:16.580 00:47:08 -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:16.580 00:47:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:16.580 rmmod nvme_tcp 00:13:16.580 rmmod nvme_fabrics 00:13:16.580 rmmod nvme_keyring 00:13:16.580 00:47:08 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:16.580 00:47:08 -- nvmf/common.sh@124 -- # set -e 00:13:16.580 00:47:08 -- nvmf/common.sh@125 -- # return 0 00:13:16.580 00:47:08 -- nvmf/common.sh@478 -- # '[' -n 2674323 ']' 00:13:16.580 00:47:08 -- nvmf/common.sh@479 -- # killprocess 2674323 00:13:16.580 00:47:08 -- common/autotest_common.sh@936 -- # '[' -z 2674323 ']' 00:13:16.580 00:47:08 -- common/autotest_common.sh@940 -- # kill -0 2674323 00:13:16.580 00:47:08 -- common/autotest_common.sh@941 -- # uname 00:13:16.580 00:47:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:13:16.580 00:47:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2674323 00:13:16.580 00:47:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:13:16.580 00:47:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:13:16.580 00:47:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2674323' 00:13:16.580 killing process with pid 2674323 00:13:16.580 00:47:08 -- common/autotest_common.sh@955 -- # kill 2674323 00:13:16.580 00:47:08 -- 
common/autotest_common.sh@960 -- # wait 2674323 00:13:16.839 00:47:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:13:16.839 00:47:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:13:16.839 00:47:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:13:16.839 00:47:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:16.839 00:47:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:16.839 00:47:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:16.839 00:47:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:16.839 00:47:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.765 00:47:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.765 00:13:18.765 real 0m11.854s 00:13:18.765 user 0m14.230s 00:13:18.765 sys 0m4.930s 00:13:18.765 00:47:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:13:18.765 00:47:11 -- common/autotest_common.sh@10 -- # set +x 00:13:18.765 ************************************ 00:13:18.765 END TEST nvmf_abort 00:13:18.765 ************************************ 00:13:18.765 00:47:11 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:18.765 00:47:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:13:18.765 00:47:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:13:18.765 00:47:11 -- common/autotest_common.sh@10 -- # set +x 00:13:19.025 ************************************ 00:13:19.025 START TEST nvmf_ns_hotplug_stress 00:13:19.025 ************************************ 00:13:19.025 00:47:11 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:19.025 * Looking for test storage... 
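Stripped of the harness plumbing, the nvmf_abort run that just ended is a short RPC sequence: build a delay bdev on top of a malloc bdev, export it over TCP, then drive it with the abort example at a queue depth (128) above what the controller advertises, so that I/O gets queued and aborts are exercised. A sketch of that sequence, using the exact commands and workspace paths traced above:

rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192 -a 256
$rpc bdev_malloc_create 64 4096 -b Malloc0
# artificial latency keeps I/O in flight long enough to be abortable
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort \
  -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128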
00:13:19.025 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:13:19.025 00:47:11 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:13:19.025 00:47:11 -- nvmf/common.sh@7 -- # uname -s 00:13:19.025 00:47:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:19.025 00:47:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:19.025 00:47:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:19.025 00:47:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:19.025 00:47:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:19.025 00:47:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:19.025 00:47:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:19.025 00:47:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:19.025 00:47:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:19.025 00:47:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:19.025 00:47:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:13:19.025 00:47:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:13:19.025 00:47:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:19.025 00:47:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:19.025 00:47:11 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:13:19.025 00:47:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:19.025 00:47:11 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:13:19.025 00:47:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:19.025 00:47:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:19.025 00:47:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:19.025 00:47:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.025 00:47:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.025 00:47:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.025 00:47:11 -- paths/export.sh@5 -- # export PATH 00:13:19.025 00:47:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:19.025 00:47:11 -- nvmf/common.sh@47 -- # : 0 00:13:19.025 00:47:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:19.025 00:47:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:19.025 00:47:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:19.025 00:47:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:19.025 00:47:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:19.025 00:47:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:19.025 00:47:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:19.025 00:47:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:19.025 00:47:11 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:13:19.025 00:47:11 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:13:19.025 00:47:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:13:19.025 00:47:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:19.025 00:47:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:13:19.025 00:47:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:13:19.025 00:47:11 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:13:19.025 00:47:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:19.025 00:47:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:19.025 00:47:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:19.026 00:47:11 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:13:19.026 00:47:11 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:13:19.026 00:47:11 -- nvmf/common.sh@285 -- # xtrace_disable 00:13:19.026 00:47:11 -- common/autotest_common.sh@10 -- # set +x 00:13:25.597 00:47:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:13:25.597 00:47:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:13:25.597 00:47:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:25.597 00:47:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:25.597 00:47:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:25.597 00:47:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:25.597 00:47:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:25.597 00:47:17 -- nvmf/common.sh@295 -- # net_devs=() 00:13:25.597 00:47:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:25.597 00:47:17 -- 
nvmf/common.sh@296 -- # e810=() 00:13:25.597 00:47:17 -- nvmf/common.sh@296 -- # local -ga e810 00:13:25.597 00:47:17 -- nvmf/common.sh@297 -- # x722=() 00:13:25.597 00:47:17 -- nvmf/common.sh@297 -- # local -ga x722 00:13:25.597 00:47:17 -- nvmf/common.sh@298 -- # mlx=() 00:13:25.597 00:47:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:13:25.597 00:47:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:25.597 00:47:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:25.597 00:47:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:25.597 00:47:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.597 00:47:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:13:25.597 Found 0000:27:00.0 (0x8086 - 0x159b) 00:13:25.597 00:47:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:25.597 00:47:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:13:25.597 Found 0000:27:00.1 (0x8086 - 0x159b) 00:13:25.597 00:47:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:25.597 00:47:17 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:13:25.597 00:47:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.597 00:47:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.597 00:47:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:25.597 00:47:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.597 00:47:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:13:25.597 Found net devices under 0000:27:00.0: cvl_0_0 00:13:25.598 
00:47:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.598 00:47:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:25.598 00:47:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:25.598 00:47:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:13:25.598 00:47:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:25.598 00:47:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:13:25.598 Found net devices under 0000:27:00.1: cvl_0_1 00:13:25.598 00:47:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:13:25.598 00:47:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:13:25.598 00:47:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:13:25.598 00:47:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:13:25.598 00:47:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:13:25.598 00:47:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:13:25.598 00:47:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:25.598 00:47:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:25.598 00:47:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:25.598 00:47:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:25.598 00:47:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:25.598 00:47:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:25.598 00:47:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:25.598 00:47:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:25.598 00:47:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:25.598 00:47:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:25.598 00:47:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:25.598 00:47:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:25.598 00:47:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:25.598 00:47:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:25.598 00:47:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:25.598 00:47:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:25.598 00:47:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:25.598 00:47:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:25.598 00:47:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:25.598 00:47:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:25.598 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:25.598 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:13:25.598 00:13:25.598 --- 10.0.0.2 ping statistics --- 00:13:25.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.598 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:13:25.598 00:47:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:25.598 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:25.598 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:13:25.598 00:13:25.598 --- 10.0.0.1 ping statistics --- 00:13:25.598 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:25.598 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:13:25.598 00:47:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:25.598 00:47:17 -- nvmf/common.sh@411 -- # return 0 00:13:25.598 00:47:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:13:25.598 00:47:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:25.598 00:47:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:13:25.598 00:47:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:13:25.598 00:47:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:25.598 00:47:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:13:25.598 00:47:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:13:25.598 00:47:17 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:13:25.598 00:47:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:13:25.598 00:47:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:13:25.598 00:47:17 -- common/autotest_common.sh@10 -- # set +x 00:13:25.598 00:47:17 -- nvmf/common.sh@470 -- # nvmfpid=2679090 00:13:25.598 00:47:17 -- nvmf/common.sh@471 -- # waitforlisten 2679090 00:13:25.598 00:47:17 -- common/autotest_common.sh@817 -- # '[' -z 2679090 ']' 00:13:25.598 00:47:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:25.598 00:47:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:13:25.598 00:47:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:25.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:25.598 00:47:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:13:25.598 00:47:17 -- common/autotest_common.sh@10 -- # set +x 00:13:25.598 00:47:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:25.598 [2024-04-27 00:47:17.342451] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:13:25.598 [2024-04-27 00:47:17.342552] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:25.598 EAL: No free 2048 kB hugepages reported on node 1 00:13:25.598 [2024-04-27 00:47:17.491397] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:25.598 [2024-04-27 00:47:17.645846] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:25.598 [2024-04-27 00:47:17.645899] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:25.598 [2024-04-27 00:47:17.645914] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:25.598 [2024-04-27 00:47:17.645929] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:25.598 [2024-04-27 00:47:17.645942] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
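The nvmf_tcp_init sequence, traced above for the second time this run, isolates one physical port in a network namespace so target and initiator can talk over real hardware on a single host. A condensed sketch of that setup plus the target launch inside the namespace; the readiness poll approximates the harness's waitforlisten with an rpc_get_methods probe, which is an assumption about the helper, not its actual implementation:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator-side port stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# launch the target inside the namespace, then poll until its RPC socket answers
ip netns exec cvl_0_0_ns_spdk \
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
until $rpc -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done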
00:13:25.598 [2024-04-27 00:47:17.646039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:13:25.598 [2024-04-27 00:47:17.646147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:13:25.598 [2024-04-27 00:47:17.646155] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:13:25.598 00:47:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:13:25.598 00:47:18 -- common/autotest_common.sh@850 -- # return 0 00:13:25.598 00:47:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:13:25.598 00:47:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:13:25.598 00:47:18 -- common/autotest_common.sh@10 -- # set +x 00:13:25.598 00:47:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:25.598 00:47:18 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:13:25.598 00:47:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:25.598 [2024-04-27 00:47:18.203483] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:25.598 00:47:18 -- target/ns_hotplug_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:25.859 00:47:18 -- target/ns_hotplug_stress.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.859 [2024-04-27 00:47:18.530317] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.859 00:47:18 -- target/ns_hotplug_stress.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:26.121 00:47:18 -- target/ns_hotplug_stress.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:13:26.380 Malloc0 00:13:26.380 00:47:18 -- target/ns_hotplug_stress.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:26.380 Delay0 00:13:26.380 00:47:19 -- target/ns_hotplug_stress.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:26.639 00:47:19 -- target/ns_hotplug_stress.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:26.639 NULL1 00:13:26.639 00:47:19 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:26.898 00:47:19 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:26.898 00:47:19 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=2679528 00:13:26.898 00:47:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:26.898 00:47:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:26.898 EAL: No free 2048 kB hugepages reported on node 1 00:13:27.159 00:47:19 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.159 00:47:19 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:13:27.159 00:47:19 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:27.418 [2024-04-27 00:47:19.898202] bdev.c:4971:_tmp_bdev_event_cb: *NOTICE*: Unexpected event type: 1 00:13:27.418 true 00:13:27.418 00:47:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:27.418 00:47:19 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.418 00:47:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.677 00:47:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:13:27.677 00:47:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:27.936 true 00:13:27.936 00:47:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:27.936 00:47:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.936 00:47:20 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.193 00:47:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:13:28.193 00:47:20 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:28.193 true 00:13:28.193 00:47:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:28.193 00:47:20 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.453 00:47:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.714 00:47:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:13:28.714 00:47:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:28.714 true 00:13:28.714 00:47:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:28.714 00:47:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:28.972 00:47:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:28.972 00:47:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:13:28.972 00:47:21 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:29.230 true 00:13:29.230 00:47:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:29.230 00:47:21 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.488 00:47:21 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.488 00:47:22 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:13:29.488 00:47:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:29.749 true 00:13:29.749 00:47:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:29.749 00:47:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.749 00:47:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.066 00:47:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:13:30.066 00:47:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:30.066 true 00:13:30.066 00:47:22 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:30.066 00:47:22 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.324 00:47:22 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.324 00:47:22 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:13:30.324 00:47:22 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:30.583 true 00:13:30.583 00:47:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:30.583 00:47:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:30.583 00:47:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:30.841 00:47:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:13:30.841 00:47:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:31.099 true 00:13:31.099 00:47:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:31.099 00:47:23 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.099 00:47:23 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.356 00:47:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:13:31.356 00:47:23 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:31.356 true 00:13:31.356 00:47:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:31.356 00:47:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.615 00:47:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.875 00:47:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:13:31.875 00:47:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1011 00:13:31.875 true 00:13:31.875 00:47:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:31.875 00:47:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.133 00:47:24 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.133 00:47:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:13:32.133 00:47:24 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:32.392 true 00:13:32.392 00:47:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:32.392 00:47:24 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.650 00:47:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.650 00:47:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:13:32.650 00:47:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:32.908 true 00:13:32.908 00:47:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:32.908 00:47:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:32.908 00:47:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.166 00:47:25 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:13:33.166 00:47:25 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:33.166 true 00:13:33.166 00:47:25 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:33.166 00:47:25 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.425 00:47:25 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.425 00:47:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:13:33.425 00:47:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:33.684 true 00:13:33.684 00:47:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:33.685 00:47:26 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.943 00:47:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.943 00:47:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:13:33.943 00:47:26 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:34.201 true 00:13:34.202 00:47:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:34.202 00:47:26 -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.460 00:47:26 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.460 00:47:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1017 00:13:34.460 00:47:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:34.718 true 00:13:34.719 00:47:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:34.719 00:47:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:34.719 00:47:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.978 00:47:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:13:34.978 00:47:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:34.978 true 00:13:34.978 00:47:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:34.978 00:47:27 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.236 00:47:27 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.236 00:47:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:13:35.236 00:47:27 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:35.495 true 00:13:35.495 00:47:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:35.495 00:47:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.753 00:47:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.753 00:47:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:13:35.753 00:47:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:36.011 true 00:13:36.011 00:47:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:36.011 00:47:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.011 00:47:28 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.270 00:47:28 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:13:36.270 00:47:28 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:36.270 true 00:13:36.270 00:47:28 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:36.270 00:47:28 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.528 00:47:29 -- target/ns_hotplug_stress.sh@37 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.528 00:47:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:13:36.528 00:47:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:36.787 true 00:13:36.787 00:47:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:36.787 00:47:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.787 00:47:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.045 00:47:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:13:37.045 00:47:29 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:37.303 true 00:13:37.303 00:47:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:37.303 00:47:29 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.303 00:47:29 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.561 00:47:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:13:37.561 00:47:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:37.561 true 00:13:37.561 00:47:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:37.561 00:47:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.820 00:47:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.820 00:47:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:13:37.820 00:47:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:38.078 true 00:13:38.078 00:47:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:38.078 00:47:30 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.078 00:47:30 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.336 00:47:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:13:38.336 00:47:30 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:13:38.336 true 00:13:38.595 00:47:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:38.595 00:47:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.595 00:47:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.855 00:47:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 
00:13:38.855 00:47:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:13:38.855 true 00:13:38.855 00:47:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:38.855 00:47:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.112 00:47:31 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.112 00:47:31 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:13:39.112 00:47:31 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:13:39.371 true 00:13:39.371 00:47:31 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:39.371 00:47:31 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.371 00:47:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.629 00:47:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:13:39.629 00:47:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:13:39.629 true 00:13:39.629 00:47:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:39.629 00:47:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.886 00:47:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.144 00:47:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:13:40.144 00:47:32 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:13:40.144 true 00:13:40.144 00:47:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:40.144 00:47:32 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.409 00:47:32 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.409 00:47:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1031 00:13:40.409 00:47:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:13:40.670 true 00:13:40.670 00:47:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:40.670 00:47:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.670 00:47:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.928 00:47:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1032 00:13:40.928 00:47:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:13:40.928 true 00:13:40.928 00:47:33 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:40.928 00:47:33 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.186 00:47:33 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.447 00:47:33 -- target/ns_hotplug_stress.sh@40 -- # null_size=1033 00:13:41.447 00:47:33 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:13:41.447 true 00:13:41.447 00:47:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:41.447 00:47:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.707 00:47:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.707 00:47:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1034 00:13:41.707 00:47:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:13:41.966 true 00:13:41.966 00:47:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:41.966 00:47:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.966 00:47:34 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.224 00:47:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1035 00:13:42.224 00:47:34 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:13:42.224 true 00:13:42.224 00:47:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:42.224 00:47:34 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.482 00:47:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.740 00:47:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1036 00:13:42.740 00:47:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:13:42.740 true 00:13:42.740 00:47:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:42.740 00:47:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.997 00:47:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.997 00:47:35 -- target/ns_hotplug_stress.sh@40 -- # null_size=1037 00:13:42.997 00:47:35 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:13:43.255 true 00:13:43.255 00:47:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:43.255 00:47:35 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.255 00:47:35 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:43.513 00:47:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1038 00:13:43.513 00:47:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:13:43.513 true 00:13:43.513 00:47:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:43.513 00:47:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:43.770 00:47:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.028 00:47:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1039 00:13:44.028 00:47:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:13:44.028 true 00:13:44.028 00:47:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:44.028 00:47:36 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.302 00:47:36 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.302 00:47:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1040 00:13:44.302 00:47:36 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:13:44.560 true 00:13:44.560 00:47:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:44.560 00:47:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.560 00:47:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:44.816 00:47:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1041 00:13:44.816 00:47:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:13:44.816 true 00:13:44.816 00:47:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:44.816 00:47:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.073 00:47:37 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.330 00:47:37 -- target/ns_hotplug_stress.sh@40 -- # null_size=1042 00:13:45.330 00:47:37 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:13:45.330 true 00:13:45.330 00:47:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:45.330 00:47:37 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.587 00:47:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:45.587 00:47:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1043 00:13:45.587 00:47:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:13:45.907 true 00:13:45.907 00:47:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:45.907 00:47:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:45.907 00:47:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.164 00:47:38 -- target/ns_hotplug_stress.sh@40 -- # null_size=1044 00:13:46.164 00:47:38 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:13:46.164 true 00:13:46.164 00:47:38 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:46.164 00:47:38 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.421 00:47:38 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.421 00:47:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1045 00:13:46.421 00:47:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:13:46.677 true 00:13:46.677 00:47:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:46.677 00:47:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:46.677 00:47:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:46.933 00:47:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1046 00:13:46.933 00:47:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:13:46.933 true 00:13:46.933 00:47:39 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:46.933 00:47:39 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.191 00:47:39 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.449 00:47:39 -- target/ns_hotplug_stress.sh@40 -- # null_size=1047 00:13:47.449 00:47:39 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:13:47.449 true 00:13:47.449 00:47:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:47.449 00:47:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.707 00:47:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:47.707 00:47:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1048 00:13:47.707 00:47:40 -- target/ns_hotplug_stress.sh@41 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:13:47.965 true 00:13:47.965 00:47:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:47.965 00:47:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.965 00:47:40 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.223 00:47:40 -- target/ns_hotplug_stress.sh@40 -- # null_size=1049 00:13:48.223 00:47:40 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:13:48.223 true 00:13:48.223 00:47:40 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:48.482 00:47:40 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.482 00:47:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.740 00:47:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1050 00:13:48.740 00:47:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:13:48.740 true 00:13:48.740 00:47:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:48.740 00:47:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:48.998 00:47:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:48.998 00:47:41 -- target/ns_hotplug_stress.sh@40 -- # null_size=1051 00:13:48.999 00:47:41 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:13:49.257 true 00:13:49.257 00:47:41 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:49.257 00:47:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.257 00:47:41 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:49.517 00:47:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1052 00:13:49.517 00:47:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1052 00:13:49.517 true 00:13:49.777 00:47:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:49.777 00:47:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:49.777 00:47:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.036 00:47:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1053 00:13:50.036 00:47:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1053 00:13:50.036 true 00:13:50.036 00:47:42 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 
00:13:50.036 00:47:42 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.294 00:47:42 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.294 00:47:42 -- target/ns_hotplug_stress.sh@40 -- # null_size=1054 00:13:50.294 00:47:42 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1054 00:13:50.551 true 00:13:50.551 00:47:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:50.551 00:47:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.551 00:47:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:50.810 00:47:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1055 00:13:50.810 00:47:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1055 00:13:50.810 true 00:13:50.810 00:47:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:50.810 00:47:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.068 00:47:43 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.328 00:47:43 -- target/ns_hotplug_stress.sh@40 -- # null_size=1056 00:13:51.328 00:47:43 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1056 00:13:51.328 true 00:13:51.328 00:47:43 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:51.328 00:47:43 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.587 00:47:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:51.587 00:47:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1057 00:13:51.587 00:47:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1057 00:13:51.847 true 00:13:51.847 00:47:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:51.847 00:47:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.106 00:47:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.106 00:47:44 -- target/ns_hotplug_stress.sh@40 -- # null_size=1058 00:13:52.106 00:47:44 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1058 00:13:52.364 true 00:13:52.364 00:47:44 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:52.364 00:47:44 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.364 
00:47:44 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.622 00:47:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1059 00:13:52.622 00:47:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1059 00:13:52.622 true 00:13:52.622 00:47:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:52.622 00:47:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.880 00:47:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:52.880 00:47:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1060 00:13:52.880 00:47:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1060 00:13:53.138 true 00:13:53.138 00:47:45 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:53.138 00:47:45 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.138 00:47:45 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.398 00:47:45 -- target/ns_hotplug_stress.sh@40 -- # null_size=1061 00:13:53.398 00:47:45 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1061 00:13:53.656 true 00:13:53.656 00:47:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:53.656 00:47:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.656 00:47:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:53.914 00:47:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1062 00:13:53.914 00:47:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1062 00:13:53.914 true 00:13:53.914 00:47:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:53.914 00:47:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.171 00:47:46 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.171 00:47:46 -- target/ns_hotplug_stress.sh@40 -- # null_size=1063 00:13:54.171 00:47:46 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1063 00:13:54.428 true 00:13:54.428 00:47:46 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:54.428 00:47:46 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.428 00:47:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:54.685 00:47:47 -- 
target/ns_hotplug_stress.sh@40 -- # null_size=1064 00:13:54.685 00:47:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1064 00:13:54.943 true 00:13:54.943 00:47:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:54.943 00:47:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.943 00:47:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.201 00:47:47 -- target/ns_hotplug_stress.sh@40 -- # null_size=1065 00:13:55.201 00:47:47 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1065 00:13:55.201 true 00:13:55.201 00:47:47 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:55.201 00:47:47 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.459 00:47:47 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.459 00:47:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1066 00:13:55.459 00:47:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1066 00:13:55.717 true 00:13:55.717 00:47:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:55.717 00:47:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.717 00:47:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:55.975 00:47:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1067 00:13:55.975 00:47:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1067 00:13:55.975 true 00:13:55.975 00:47:48 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:55.975 00:47:48 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.234 00:47:48 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.492 00:47:48 -- target/ns_hotplug_stress.sh@40 -- # null_size=1068 00:13:56.492 00:47:48 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1068 00:13:56.492 true 00:13:56.492 00:47:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528 00:13:56.492 00:47:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:56.750 00:47:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:56.750 00:47:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1069 00:13:56.750 00:47:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 
1069
00:13:57.008 true
00:13:57.008 00:47:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528
00:13:57.008 00:47:49 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:57.008 00:47:49 -- target/ns_hotplug_stress.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:57.008 Initializing NVMe Controllers
00:13:57.008 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:57.008 Controller IO queue size 128, less than required.
00:13:57.008 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:57.008 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:57.008 Initialization complete. Launching workers.
00:13:57.008 ========================================================
00:13:57.008 Latency(us)
00:13:57.008 Device Information : IOPS MiB/s Average min max
00:13:57.008 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 27547.47 13.45 4646.52 3341.03 8688.14
00:13:57.008 ========================================================
00:13:57.008 Total : 27547.47 13.45 4646.52 3341.03 8688.14
00:13:57.008
00:13:57.266 00:47:49 -- target/ns_hotplug_stress.sh@40 -- # null_size=1070
00:13:57.266 00:47:49 -- target/ns_hotplug_stress.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1070
00:13:57.266 true
00:13:57.266 00:47:49 -- target/ns_hotplug_stress.sh@35 -- # kill -0 2679528
00:13:57.266 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (2679528) - No such process
00:13:57.266 00:47:49 -- target/ns_hotplug_stress.sh@44 -- # wait 2679528
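What the iterations above replay is the body of ns_hotplug_stress.sh lines 35-41: while the background I/O generator is still alive, namespace 1 is hot-removed, the Delay0 bdev is re-attached as a namespace, and the NULL1 bdev is grown by one unit. A minimal sketch of that loop follows; rpc, PERF_PID and the starting null_size are stand-in names for this illustration, not necessarily the script's exact variables. Note the "kill: (2679528) - No such process" at 00:13:57.266 is the expected loop-exit condition once the generator finishes, not a failure.

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    PERF_PID=2679528   # background I/O generator; kill -0 only probes that it still exists
    null_size=1010     # assumed starting point for this excerpt
    while kill -0 "$PERF_PID" 2>/dev/null; do
        "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # hot-unplug NSID 1
        "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # hot-plug it back
        null_size=$((null_size + 1))
        "$rpc" bdev_null_resize NULL1 "$null_size"                       # grow the null bdev under the namespace
    done
    wait "$PERF_PID"                                                     # propagate the generator's exit status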
00:13:57.266 00:47:49 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT
00:13:57.266 00:47:49 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini
00:13:57.266 00:47:49 -- nvmf/common.sh@477 -- # nvmfcleanup
00:13:57.266 00:47:49 -- nvmf/common.sh@117 -- # sync
00:13:57.266 00:47:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:13:57.266 00:47:49 -- nvmf/common.sh@120 -- # set +e
00:13:57.266 00:47:49 -- nvmf/common.sh@121 -- # for i in {1..20}
00:13:57.266 00:47:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:13:57.266 rmmod nvme_tcp
rmmod nvme_fabrics
00:13:57.534 rmmod nvme_keyring
00:13:57.534 00:47:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:13:57.534 00:47:49 -- nvmf/common.sh@124 -- # set -e
00:13:57.534 00:47:49 -- nvmf/common.sh@125 -- # return 0
00:13:57.534 00:47:49 -- nvmf/common.sh@478 -- # '[' -n 2679090 ']'
00:13:57.534 00:47:49 -- nvmf/common.sh@479 -- # killprocess 2679090
00:13:57.535 00:47:49 -- common/autotest_common.sh@936 -- # '[' -z 2679090 ']'
00:13:57.535 00:47:49 -- common/autotest_common.sh@940 -- # kill -0 2679090
00:13:57.535 00:47:49 -- common/autotest_common.sh@941 -- # uname
00:13:57.535 00:47:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:13:57.535 00:47:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2679090
00:13:57.535 00:47:50 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:13:57.535 00:47:50 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:13:57.535 00:47:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2679090'
00:13:57.535 killing process with pid 2679090
00:13:57.535 00:47:50 -- common/autotest_common.sh@955 -- # kill 2679090
00:13:57.535 00:47:50 -- common/autotest_common.sh@960 -- # wait 2679090
00:13:58.107 00:47:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:13:58.107 00:47:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:13:58.107 00:47:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:13:58.107 00:47:50 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:13:58.107 00:47:50 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:13:58.107 00:47:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:13:58.107 00:47:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:13:58.107 00:47:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:00.034 00:47:52 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:00.034
00:14:00.034 real 0m41.025s
00:14:00.034 user 2m33.815s
00:14:00.034 sys 0m11.137s
00:14:00.034 00:47:52 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:14:00.034 00:47:52 -- common/autotest_common.sh@10 -- # set +x
00:14:00.034 ************************************
00:14:00.034 END TEST nvmf_ns_hotplug_stress
00:14:00.034 ************************************
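The teardown just logged (nvmftestfini) reduces to three steps: retry unloading the kernel NVMe/TCP modules, kill and reap the nvmf_tgt process, then dismantle the test network namespace. A hedged sketch under those assumptions; nvmfpid and the explicit netns delete stand in for helpers such as _remove_spdk_ns whose bodies are not shown in this trace:

    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # retried because the module can still be busy while connections drain
    done
    modprobe -v -r nvme-fabrics
    set -e
    if [ -n "$nvmfpid" ]; then             # 2679090 in this run
        kill "$nvmfpid"
        wait "$nvmfpid"                    # reap so ports and hugepages are actually released
    fi
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1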
00:14:00.034 00:47:52 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:00.034 00:47:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:14:00.034 00:47:52 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:14:00.034 00:47:52 -- common/autotest_common.sh@10 -- # set +x
00:14:00.034 ************************************
00:14:00.034 START TEST nvmf_connect_stress
00:14:00.034 ************************************
00:14:00.034 00:47:52 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:14:00.293 * Looking for test storage...
00:14:00.293 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:14:00.293 00:47:52 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:14:00.293 00:47:52 -- nvmf/common.sh@7 -- # uname -s
00:14:00.293 00:47:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:00.293 00:47:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:00.293 00:47:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:00.293 00:47:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:00.293 00:47:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:00.293 00:47:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:00.293 00:47:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:00.293 00:47:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:00.293 00:47:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:00.293 00:47:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:00.293 00:47:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea
00:14:00.293 00:47:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea
00:14:00.293 00:47:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:00.293 00:47:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:00.293 00:47:52 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:14:00.293 00:47:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:00.293 00:47:52 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:14:00.293 00:47:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:00.293 00:47:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:00.293 00:47:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:00.294 00:47:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:00.294 00:47:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:00.294 00:47:52 -- paths/export.sh@4 -- #
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.294 00:47:52 -- paths/export.sh@5 -- # export PATH 00:14:00.294 00:47:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.294 00:47:52 -- nvmf/common.sh@47 -- # : 0 00:14:00.294 00:47:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:00.294 00:47:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:00.294 00:47:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:00.294 00:47:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.294 00:47:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.294 00:47:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:00.294 00:47:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:00.294 00:47:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:00.294 00:47:52 -- target/connect_stress.sh@12 -- # nvmftestinit 00:14:00.294 00:47:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:00.294 00:47:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:00.294 00:47:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:00.294 00:47:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:00.294 00:47:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:00.294 00:47:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:00.294 00:47:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:00.294 00:47:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:00.294 00:47:52 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:14:00.294 00:47:52 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:00.294 00:47:52 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:00.294 00:47:52 -- common/autotest_common.sh@10 -- # set +x 00:14:05.635 00:47:57 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:05.635 00:47:57 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:05.635 00:47:57 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:05.635 00:47:57 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:05.635 00:47:57 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:05.635 00:47:57 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:05.635 00:47:57 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:05.635 00:47:57 -- nvmf/common.sh@295 -- # net_devs=() 00:14:05.635 00:47:57 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:05.635 00:47:57 -- nvmf/common.sh@296 -- # e810=() 00:14:05.635 00:47:57 -- nvmf/common.sh@296 -- # local -ga e810 00:14:05.635 00:47:57 -- nvmf/common.sh@297 -- # 
x722=() 00:14:05.635 00:47:57 -- nvmf/common.sh@297 -- # local -ga x722 00:14:05.635 00:47:57 -- nvmf/common.sh@298 -- # mlx=() 00:14:05.635 00:47:57 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:05.635 00:47:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:05.635 00:47:57 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:05.635 00:47:57 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:05.635 00:47:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.635 00:47:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:05.635 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:05.635 00:47:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:05.635 00:47:57 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:05.635 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:05.635 00:47:57 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:05.635 00:47:57 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:05.635 00:47:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:05.635 00:47:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.635 00:47:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.635 00:47:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.635 00:47:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:05.635 Found net devices under 0000:27:00.0: cvl_0_0 00:14:05.636 00:47:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.636 00:47:57 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
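The device scan logged around this point pairs known vendor/device IDs (Intel 0x8086 with the E810's 0x159b in this run) with the kernel net devices that sit behind each matching PCI function. The harness itself walks a prebuilt PCI bus cache; a rough standalone equivalent using lspci, offered only as an assumption-laden illustration, would be:

    shopt -s nullglob
    net_devs=()
    for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do   # E810 ports, 0000:27:00.0/1 above
        for dev in "/sys/bus/pci/devices/$pci/net/"*; do
            net_devs+=("${dev##*/}")                              # cvl_0_0 and cvl_0_1 in this run
        done
    done
    printf '%s\n' "${net_devs[@]}"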
00:14:05.636 00:47:57 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:05.636 00:47:57 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:05.636 00:47:57 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:05.636 00:47:57 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:05.636 Found net devices under 0000:27:00.1: cvl_0_1 00:14:05.636 00:47:57 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:05.636 00:47:57 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:05.636 00:47:57 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:05.636 00:47:57 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:05.636 00:47:57 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:05.636 00:47:57 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:05.636 00:47:57 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:05.636 00:47:57 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:05.636 00:47:57 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:05.636 00:47:57 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:05.636 00:47:57 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:05.636 00:47:57 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:05.636 00:47:57 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:05.636 00:47:57 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:05.636 00:47:57 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:05.636 00:47:57 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:05.636 00:47:57 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:05.636 00:47:57 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:05.636 00:47:57 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:05.636 00:47:57 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:05.636 00:47:57 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:05.636 00:47:57 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:05.636 00:47:57 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:05.636 00:47:57 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:05.636 00:47:57 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:05.636 00:47:57 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:05.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:05.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:14:05.636 00:14:05.636 --- 10.0.0.2 ping statistics --- 00:14:05.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.636 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:14:05.636 00:47:57 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:05.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:05.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.065 ms 00:14:05.636 00:14:05.636 --- 10.0.0.1 ping statistics --- 00:14:05.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:05.636 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:14:05.636 00:47:57 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:05.636 00:47:57 -- nvmf/common.sh@411 -- # return 0 00:14:05.636 00:47:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:05.636 00:47:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:05.636 00:47:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:05.636 00:47:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:05.636 00:47:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:05.636 00:47:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:05.636 00:47:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:05.636 00:47:57 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:05.636 00:47:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:05.636 00:47:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:05.636 00:47:57 -- common/autotest_common.sh@10 -- # set +x 00:14:05.636 00:47:57 -- nvmf/common.sh@470 -- # nvmfpid=2689859 00:14:05.636 00:47:57 -- nvmf/common.sh@471 -- # waitforlisten 2689859 00:14:05.636 00:47:57 -- common/autotest_common.sh@817 -- # '[' -z 2689859 ']' 00:14:05.636 00:47:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:05.636 00:47:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:05.636 00:47:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:05.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:05.636 00:47:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:05.636 00:47:57 -- common/autotest_common.sh@10 -- # set +x 00:14:05.636 00:47:57 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:05.636 [2024-04-27 00:47:58.018434] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:05.636 [2024-04-27 00:47:58.018534] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:05.636 EAL: No free 2048 kB hugepages reported on node 1 00:14:05.636 [2024-04-27 00:47:58.139665] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:05.636 [2024-04-27 00:47:58.237050] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:05.636 [2024-04-27 00:47:58.237091] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:05.636 [2024-04-27 00:47:58.237104] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:05.636 [2024-04-27 00:47:58.237113] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:05.636 [2024-04-27 00:47:58.237120] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
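nvmfappstart, traced above, launches nvmf_tgt inside the target namespace and then waitforlisten blocks until the RPC socket answers. waitforlisten's body is not shown in this trace, so the polling loop below is only an approximation; rpc_get_methods is simply a cheap RPC used here to probe readiness. Note that -m 0xE pins the app to cores 1-3, matching the three reactor lines just logged.

    SPDK=/var/jenkins/workspace/dsa-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    for ((i = 0; i < 100; i++)); do        # max_retries=100, as in the trace
        if "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
            break                          # target is up once the RPC socket responds
        fi
        sleep 0.1
    done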
00:14:05.636 [2024-04-27 00:47:58.237199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:05.636 [2024-04-27 00:47:58.237294] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.636 [2024-04-27 00:47:58.237304] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.208 00:47:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:06.208 00:47:58 -- common/autotest_common.sh@850 -- # return 0 00:14:06.208 00:47:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:06.208 00:47:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:06.208 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 00:47:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.208 00:47:58 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.208 00:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.208 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 [2024-04-27 00:47:58.762768] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.208 00:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.208 00:47:58 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:06.208 00:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.208 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 00:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.208 00:47:58 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.208 00:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.208 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 [2024-04-27 00:47:58.802680] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.208 00:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.208 00:47:58 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:06.208 00:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.208 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.208 NULL1 00:14:06.208 00:47:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.208 00:47:58 -- target/connect_stress.sh@21 -- # PERF_PID=2689897 00:14:06.208 00:47:58 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:06.209 00:47:58 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:06.209 00:47:58 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:06.209 00:47:58 -- target/connect_stress.sh@27 -- # seq 1 20 00:14:06.209 00:47:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.209 00:47:58 -- target/connect_stress.sh@28 -- # cat 00:14:06.209 00:47:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.209 00:47:58 -- target/connect_stress.sh@28 -- # cat 00:14:06.209 00:47:58 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:06.209 00:47:58 -- target/connect_stress.sh@28 -- # cat 00:14:06.209 00:47:58 -- target/connect_stress.sh@27 
-- # for i in $(seq 1 20) 00:14:06.209 00:47:58 -- target/connect_stress.sh@28 -- # cat ... 00:14:06.209 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.209 00:47:58 -- target/connect_stress.sh@28 -- # cat 00:14:06.209 00:47:58 -- target/connect_stress.sh@34 -- # kill -0 2689897 00:14:06.209 00:47:58 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.209 00:47:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.209 00:47:58 -- common/autotest_common.sh@10 -- # set +x 00:14:06.776 00:47:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:06.776 00:47:59 -- target/connect_stress.sh@34 -- # kill -0 2689897 00:14:06.776 00:47:59 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.776 00:47:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:06.776 00:47:59 -- common/autotest_common.sh@10 -- # set +x ... 00:14:16.392 00:48:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.392 00:48:08 -- target/connect_stress.sh@34 -- # kill -0 2689897 00:14:16.392 00:48:08 -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:16.392 00:48:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:16.392 00:48:08 -- common/autotest_common.sh@10 -- # set +x 00:14:16.392 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:16.651 00:48:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:16.651 00:48:09 -- target/connect_stress.sh@34 -- # kill -0 2689897 00:14:16.651 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2689897) - No such process 00:14:16.651 00:48:09 -- target/connect_stress.sh@38 -- # wait 2689897 00:14:16.651 00:48:09 -- target/connect_stress.sh@39 -- # rm -f
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:16.651 00:48:09 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:14:16.651 00:48:09 -- target/connect_stress.sh@43 -- # nvmftestfini 00:14:16.651 00:48:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:16.651 00:48:09 -- nvmf/common.sh@117 -- # sync 00:14:16.651 00:48:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:16.651 00:48:09 -- nvmf/common.sh@120 -- # set +e 00:14:16.651 00:48:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:16.651 00:48:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:16.651 rmmod nvme_tcp 00:14:16.651 rmmod nvme_fabrics 00:14:16.651 rmmod nvme_keyring 00:14:16.651 00:48:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.651 00:48:09 -- nvmf/common.sh@124 -- # set -e 00:14:16.651 00:48:09 -- nvmf/common.sh@125 -- # return 0 00:14:16.651 00:48:09 -- nvmf/common.sh@478 -- # '[' -n 2689859 ']' 00:14:16.651 00:48:09 -- nvmf/common.sh@479 -- # killprocess 2689859 00:14:16.651 00:48:09 -- common/autotest_common.sh@936 -- # '[' -z 2689859 ']' 00:14:16.651 00:48:09 -- common/autotest_common.sh@940 -- # kill -0 2689859 00:14:16.651 00:48:09 -- common/autotest_common.sh@941 -- # uname 00:14:16.651 00:48:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:16.651 00:48:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2689859 00:14:16.651 00:48:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:16.651 00:48:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:16.651 00:48:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2689859' 00:14:16.651 killing process with pid 2689859 00:14:16.651 00:48:09 -- common/autotest_common.sh@955 -- # kill 2689859 00:14:16.651 00:48:09 -- common/autotest_common.sh@960 -- # wait 2689859 00:14:17.216 00:48:09 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:17.216 00:48:09 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:17.216 00:48:09 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:17.216 00:48:09 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:17.216 00:48:09 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:17.216 00:48:09 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.216 00:48:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:17.216 00:48:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.122 00:48:11 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.122 00:14:19.122 real 0m19.074s 00:14:19.122 user 0m43.898s 00:14:19.122 sys 0m5.556s 00:14:19.122 00:48:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:19.122 00:48:11 -- common/autotest_common.sh@10 -- # set +x 00:14:19.122 ************************************ 00:14:19.122 END TEST nvmf_connect_stress 00:14:19.122 ************************************ 00:14:19.382 00:48:11 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:19.382 00:48:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:19.382 00:48:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:19.382 00:48:11 -- common/autotest_common.sh@10 -- # set +x 00:14:19.382 ************************************ 00:14:19.382 START TEST nvmf_fused_ordering 00:14:19.382 ************************************ 00:14:19.382 00:48:11 -- common/autotest_common.sh@1111 -- # 
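The nvmf_connect_stress flow traced above reduces to a short shell sequence. The sketch below is reconstructed from this trace only, with scripts/rpc.py standing in for the rpc_cmd wrapper used by autotest_common.sh and paths assumed relative to the SPDK tree:

    # Target setup, as issued by connect_stress.sh@15-18 above
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py bdev_null_create NULL1 1000 512
    # Stressor in the background (connect_stress.sh@20), then poll its pid while
    # replaying the batch of RPCs collected in rpc.txt (connect_stress.sh@27-35)
    test/nvme/connect_stress/connect_stress -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!
    while kill -0 "$PERF_PID" 2>/dev/null; do
        scripts/rpc.py < test/nvmf/target/rpc.txt   # batch mode, one RPC per line
    done
    # Teardown (nvmftestfini), mirroring the modprobe -r calls above
    sync
    modprobe -v -r nvme-tcp
    modprobe -v -r nvme-fabrics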
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:14:19.382 * Looking for test storage... 00:14:19.382 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:19.382 00:48:11 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.382 00:48:11 -- nvmf/common.sh@7 -- # uname -s 00:14:19.382 00:48:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.382 00:48:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.382 00:48:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.382 00:48:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.382 00:48:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.382 00:48:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.382 00:48:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.382 00:48:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.382 00:48:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.382 00:48:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.382 00:48:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:14:19.382 00:48:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:14:19.382 00:48:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.382 00:48:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.382 00:48:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:19.382 00:48:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.382 00:48:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:19.382 00:48:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.382 00:48:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.382 00:48:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.382 00:48:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.382 00:48:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.382 00:48:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.382 00:48:12 -- paths/export.sh@5 -- # export PATH 00:14:19.382 00:48:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.382 00:48:12 -- nvmf/common.sh@47 -- # : 0 00:14:19.382 00:48:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.382 00:48:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.382 00:48:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.382 00:48:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.382 00:48:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.382 00:48:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.382 00:48:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.382 00:48:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.382 00:48:12 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:14:19.382 00:48:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:19.382 00:48:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.382 00:48:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:19.382 00:48:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:19.382 00:48:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:19.382 00:48:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.382 00:48:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:19.382 00:48:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.382 00:48:12 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:14:19.382 00:48:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:19.382 00:48:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.382 00:48:12 -- common/autotest_common.sh@10 -- # set +x 00:14:24.661 00:48:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:24.661 00:48:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:24.661 00:48:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:24.661 00:48:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:24.661 00:48:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:24.661 00:48:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:24.661 00:48:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:24.661 00:48:17 -- nvmf/common.sh@295 -- # net_devs=() 00:14:24.661 00:48:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:24.661 00:48:17 -- nvmf/common.sh@296 -- # e810=() 00:14:24.661 00:48:17 -- nvmf/common.sh@296 -- # local -ga e810 00:14:24.661 00:48:17 -- nvmf/common.sh@297 -- # 
x722=() 00:14:24.661 00:48:17 -- nvmf/common.sh@297 -- # local -ga x722 00:14:24.661 00:48:17 -- nvmf/common.sh@298 -- # mlx=() 00:14:24.661 00:48:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:24.661 00:48:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:24.661 00:48:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:24.661 00:48:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:24.661 00:48:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.661 00:48:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:24.661 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:24.661 00:48:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:24.661 00:48:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:24.661 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:24.661 00:48:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:24.661 00:48:17 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:24.661 00:48:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.661 00:48:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.661 00:48:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.661 00:48:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:24.661 Found net devices under 0000:27:00.0: cvl_0_0 00:14:24.661 00:48:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.661 00:48:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
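The per-device probing traced here (pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)) comes down to a sysfs lookup: for each PCI function the script lists its net/ directory to find the bound kernel netdev. A minimal standalone sketch, using the two addresses reported in this run:

    for pci in 0000:27:00.0 0000:27:00.1; do
        # each entry under .../net/ is the kernel interface for that PCI function
        ls "/sys/bus/pci/devices/$pci/net/"
    done

On this node the loop reports cvl_0_0 and cvl_0_1, the two ice ports used below.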
00:14:24.661 00:48:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:24.661 00:48:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:24.661 00:48:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:24.661 00:48:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:24.661 Found net devices under 0000:27:00.1: cvl_0_1 00:14:24.661 00:48:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:24.661 00:48:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:24.661 00:48:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:24.661 00:48:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:24.661 00:48:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:24.661 00:48:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:24.661 00:48:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:24.661 00:48:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:24.661 00:48:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:24.661 00:48:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:24.661 00:48:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:24.661 00:48:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:24.661 00:48:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:24.661 00:48:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:24.661 00:48:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:24.661 00:48:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:24.661 00:48:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:24.661 00:48:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:24.661 00:48:17 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:24.661 00:48:17 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:24.661 00:48:17 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:24.661 00:48:17 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:24.661 00:48:17 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:24.661 00:48:17 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:24.661 00:48:17 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:24.661 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:24.661 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:14:24.661 00:14:24.661 --- 10.0.0.2 ping statistics --- 00:14:24.661 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.661 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:14:24.661 00:48:17 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:24.662 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:24.662 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:14:24.662 00:14:24.662 --- 10.0.0.1 ping statistics --- 00:14:24.662 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:24.662 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:14:24.662 00:48:17 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:24.662 00:48:17 -- nvmf/common.sh@411 -- # return 0 00:14:24.662 00:48:17 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:24.662 00:48:17 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:24.662 00:48:17 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:24.662 00:48:17 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:24.662 00:48:17 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:24.662 00:48:17 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:24.662 00:48:17 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:24.662 00:48:17 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:24.662 00:48:17 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:24.662 00:48:17 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:24.662 00:48:17 -- common/autotest_common.sh@10 -- # set +x 00:14:24.662 00:48:17 -- nvmf/common.sh@470 -- # nvmfpid=2696431 00:14:24.662 00:48:17 -- nvmf/common.sh@471 -- # waitforlisten 2696431 00:14:24.662 00:48:17 -- common/autotest_common.sh@817 -- # '[' -z 2696431 ']' 00:14:24.662 00:48:17 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:24.662 00:48:17 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:24.662 00:48:17 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:24.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:24.662 00:48:17 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:24.662 00:48:17 -- common/autotest_common.sh@10 -- # set +x 00:14:24.662 00:48:17 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:24.920 [2024-04-27 00:48:17.389924] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:24.920 [2024-04-27 00:48:17.390030] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:24.920 EAL: No free 2048 kB hugepages reported on node 1 00:14:24.920 [2024-04-27 00:48:17.536377] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:25.178 [2024-04-27 00:48:17.690355] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:25.178 [2024-04-27 00:48:17.690403] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:25.178 [2024-04-27 00:48:17.690418] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:25.178 [2024-04-27 00:48:17.690433] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:25.178 [2024-04-27 00:48:17.690445] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
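Before the target app comes up, nvmf_tcp_init wires an isolated test network: the target-side port moves into a network namespace and each end gets an address on 10.0.0.0/24. Condensed from the ip/iptables calls traced above (interface and namespace names are the ones from this run):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                 # both pings above succeed
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

The nvmf_tgt process is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x2), which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD.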
00:14:25.178 [2024-04-27 00:48:17.690489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:25.462 00:48:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:25.462 00:48:18 -- common/autotest_common.sh@850 -- # return 0 00:14:25.462 00:48:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:25.462 00:48:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:25.462 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.462 00:48:18 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:25.462 00:48:18 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:25.462 00:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.462 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.462 [2024-04-27 00:48:18.120099] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:25.462 00:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.462 00:48:18 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:25.462 00:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.462 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.462 00:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.462 00:48:18 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:25.462 00:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.462 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.766 [2024-04-27 00:48:18.144315] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:25.766 00:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.766 00:48:18 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:25.766 00:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.766 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.766 NULL1 00:14:25.766 00:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.766 00:48:18 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:25.766 00:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.766 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.766 00:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.766 00:48:18 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:25.766 00:48:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:25.766 00:48:18 -- common/autotest_common.sh@10 -- # set +x 00:14:25.766 00:48:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:25.766 00:48:18 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:25.766 [2024-04-27 00:48:18.213835] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:14:25.766 [2024-04-27 00:48:18.213910] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2696739 ] 00:14:25.766 EAL: No free 2048 kB hugepages reported on node 1 00:14:26.027 Attached to nqn.2016-06.io.spdk:cnode1 00:14:26.027 Namespace ID: 1 size: 1GB 00:14:26.027 fused_ordering(0) 00:14:26.027 fused_ordering(1) 00:14:26.027 fused_ordering(2) ... 00:14:27.373 fused_ordering(849)
00:14:27.373 fused_ordering(850) 00:14:27.373 fused_ordering(851) 00:14:27.373 fused_ordering(852) 00:14:27.373 fused_ordering(853) 00:14:27.373 fused_ordering(854) 00:14:27.373 fused_ordering(855) 00:14:27.373 fused_ordering(856) 00:14:27.373 fused_ordering(857) 00:14:27.373 fused_ordering(858) 00:14:27.373 fused_ordering(859) 00:14:27.373 fused_ordering(860) 00:14:27.373 fused_ordering(861) 00:14:27.373 fused_ordering(862) 00:14:27.373 fused_ordering(863) 00:14:27.373 fused_ordering(864) 00:14:27.373 fused_ordering(865) 00:14:27.373 fused_ordering(866) 00:14:27.373 fused_ordering(867) 00:14:27.373 fused_ordering(868) 00:14:27.373 fused_ordering(869) 00:14:27.373 fused_ordering(870) 00:14:27.373 fused_ordering(871) 00:14:27.373 fused_ordering(872) 00:14:27.373 fused_ordering(873) 00:14:27.373 fused_ordering(874) 00:14:27.373 fused_ordering(875) 00:14:27.373 fused_ordering(876) 00:14:27.373 fused_ordering(877) 00:14:27.373 fused_ordering(878) 00:14:27.373 fused_ordering(879) 00:14:27.373 fused_ordering(880) 00:14:27.373 fused_ordering(881) 00:14:27.373 fused_ordering(882) 00:14:27.373 fused_ordering(883) 00:14:27.373 fused_ordering(884) 00:14:27.373 fused_ordering(885) 00:14:27.373 fused_ordering(886) 00:14:27.373 fused_ordering(887) 00:14:27.373 fused_ordering(888) 00:14:27.373 fused_ordering(889) 00:14:27.373 fused_ordering(890) 00:14:27.373 fused_ordering(891) 00:14:27.373 fused_ordering(892) 00:14:27.373 fused_ordering(893) 00:14:27.373 fused_ordering(894) 00:14:27.373 fused_ordering(895) 00:14:27.373 fused_ordering(896) 00:14:27.373 fused_ordering(897) 00:14:27.373 fused_ordering(898) 00:14:27.373 fused_ordering(899) 00:14:27.373 fused_ordering(900) 00:14:27.373 fused_ordering(901) 00:14:27.373 fused_ordering(902) 00:14:27.373 fused_ordering(903) 00:14:27.373 fused_ordering(904) 00:14:27.373 fused_ordering(905) 00:14:27.373 fused_ordering(906) 00:14:27.373 fused_ordering(907) 00:14:27.373 fused_ordering(908) 00:14:27.373 fused_ordering(909) 00:14:27.373 fused_ordering(910) 00:14:27.373 fused_ordering(911) 00:14:27.373 fused_ordering(912) 00:14:27.373 fused_ordering(913) 00:14:27.373 fused_ordering(914) 00:14:27.373 fused_ordering(915) 00:14:27.373 fused_ordering(916) 00:14:27.373 fused_ordering(917) 00:14:27.373 fused_ordering(918) 00:14:27.373 fused_ordering(919) 00:14:27.373 fused_ordering(920) 00:14:27.373 fused_ordering(921) 00:14:27.373 fused_ordering(922) 00:14:27.373 fused_ordering(923) 00:14:27.373 fused_ordering(924) 00:14:27.373 fused_ordering(925) 00:14:27.373 fused_ordering(926) 00:14:27.373 fused_ordering(927) 00:14:27.373 fused_ordering(928) 00:14:27.373 fused_ordering(929) 00:14:27.373 fused_ordering(930) 00:14:27.373 fused_ordering(931) 00:14:27.373 fused_ordering(932) 00:14:27.373 fused_ordering(933) 00:14:27.373 fused_ordering(934) 00:14:27.373 fused_ordering(935) 00:14:27.373 fused_ordering(936) 00:14:27.373 fused_ordering(937) 00:14:27.373 fused_ordering(938) 00:14:27.373 fused_ordering(939) 00:14:27.373 fused_ordering(940) 00:14:27.373 fused_ordering(941) 00:14:27.373 fused_ordering(942) 00:14:27.373 fused_ordering(943) 00:14:27.373 fused_ordering(944) 00:14:27.373 fused_ordering(945) 00:14:27.373 fused_ordering(946) 00:14:27.373 fused_ordering(947) 00:14:27.373 fused_ordering(948) 00:14:27.373 fused_ordering(949) 00:14:27.373 fused_ordering(950) 00:14:27.373 fused_ordering(951) 00:14:27.373 fused_ordering(952) 00:14:27.373 fused_ordering(953) 00:14:27.373 fused_ordering(954) 00:14:27.373 fused_ordering(955) 00:14:27.373 fused_ordering(956) 00:14:27.373 
fused_ordering(957) 00:14:27.373 fused_ordering(958) 00:14:27.373 fused_ordering(959) 00:14:27.373 fused_ordering(960) 00:14:27.373 fused_ordering(961) 00:14:27.373 fused_ordering(962) 00:14:27.373 fused_ordering(963) 00:14:27.373 fused_ordering(964) 00:14:27.373 fused_ordering(965) 00:14:27.373 fused_ordering(966) 00:14:27.373 fused_ordering(967) 00:14:27.373 fused_ordering(968) 00:14:27.373 fused_ordering(969) 00:14:27.373 fused_ordering(970) 00:14:27.373 fused_ordering(971) 00:14:27.373 fused_ordering(972) 00:14:27.373 fused_ordering(973) 00:14:27.373 fused_ordering(974) 00:14:27.373 fused_ordering(975) 00:14:27.373 fused_ordering(976) 00:14:27.373 fused_ordering(977) 00:14:27.373 fused_ordering(978) 00:14:27.373 fused_ordering(979) 00:14:27.373 fused_ordering(980) 00:14:27.373 fused_ordering(981) 00:14:27.373 fused_ordering(982) 00:14:27.373 fused_ordering(983) 00:14:27.373 fused_ordering(984) 00:14:27.373 fused_ordering(985) 00:14:27.373 fused_ordering(986) 00:14:27.373 fused_ordering(987) 00:14:27.373 fused_ordering(988) 00:14:27.373 fused_ordering(989) 00:14:27.373 fused_ordering(990) 00:14:27.373 fused_ordering(991) 00:14:27.373 fused_ordering(992) 00:14:27.373 fused_ordering(993) 00:14:27.373 fused_ordering(994) 00:14:27.373 fused_ordering(995) 00:14:27.373 fused_ordering(996) 00:14:27.373 fused_ordering(997) 00:14:27.373 fused_ordering(998) 00:14:27.373 fused_ordering(999) 00:14:27.373 fused_ordering(1000) 00:14:27.373 fused_ordering(1001) 00:14:27.373 fused_ordering(1002) 00:14:27.373 fused_ordering(1003) 00:14:27.373 fused_ordering(1004) 00:14:27.373 fused_ordering(1005) 00:14:27.373 fused_ordering(1006) 00:14:27.373 fused_ordering(1007) 00:14:27.373 fused_ordering(1008) 00:14:27.373 fused_ordering(1009) 00:14:27.373 fused_ordering(1010) 00:14:27.373 fused_ordering(1011) 00:14:27.373 fused_ordering(1012) 00:14:27.373 fused_ordering(1013) 00:14:27.373 fused_ordering(1014) 00:14:27.373 fused_ordering(1015) 00:14:27.373 fused_ordering(1016) 00:14:27.373 fused_ordering(1017) 00:14:27.373 fused_ordering(1018) 00:14:27.373 fused_ordering(1019) 00:14:27.373 fused_ordering(1020) 00:14:27.373 fused_ordering(1021) 00:14:27.373 fused_ordering(1022) 00:14:27.373 fused_ordering(1023) 00:14:27.373 00:48:20 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:27.373 00:48:20 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:27.373 00:48:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:14:27.373 00:48:20 -- nvmf/common.sh@117 -- # sync 00:14:27.634 00:48:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:27.634 00:48:20 -- nvmf/common.sh@120 -- # set +e 00:14:27.634 00:48:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:27.634 00:48:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:27.634 rmmod nvme_tcp 00:14:27.634 rmmod nvme_fabrics 00:14:27.634 rmmod nvme_keyring 00:14:27.634 00:48:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:27.634 00:48:20 -- nvmf/common.sh@124 -- # set -e 00:14:27.634 00:48:20 -- nvmf/common.sh@125 -- # return 0 00:14:27.634 00:48:20 -- nvmf/common.sh@478 -- # '[' -n 2696431 ']' 00:14:27.634 00:48:20 -- nvmf/common.sh@479 -- # killprocess 2696431 00:14:27.634 00:48:20 -- common/autotest_common.sh@936 -- # '[' -z 2696431 ']' 00:14:27.634 00:48:20 -- common/autotest_common.sh@940 -- # kill -0 2696431 00:14:27.634 00:48:20 -- common/autotest_common.sh@941 -- # uname 00:14:27.634 00:48:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:27.634 00:48:20 -- common/autotest_common.sh@942 -- # ps --no-headers 
-o comm= 2696431 00:14:27.634 00:48:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:14:27.634 00:48:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:14:27.634 00:48:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2696431' 00:14:27.634 killing process with pid 2696431 00:14:27.634 00:48:20 -- common/autotest_common.sh@955 -- # kill 2696431 00:14:27.634 00:48:20 -- common/autotest_common.sh@960 -- # wait 2696431 00:14:28.200 00:48:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:14:28.200 00:48:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:14:28.200 00:48:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:14:28.200 00:48:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:28.200 00:48:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:28.200 00:48:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:28.200 00:48:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:28.200 00:48:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.101 00:48:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:30.101 00:14:30.101 real 0m10.800s 00:14:30.101 user 0m6.095s 00:14:30.101 sys 0m4.904s 00:14:30.101 00:48:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:30.101 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:14:30.101 ************************************ 00:14:30.101 END TEST nvmf_fused_ordering 00:14:30.101 ************************************ 00:14:30.101 00:48:22 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:30.101 00:48:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:30.101 00:48:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:30.101 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:14:30.361 ************************************ 00:14:30.361 START TEST nvmf_delete_subsystem 00:14:30.361 ************************************ 00:14:30.361 00:48:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:30.361 * Looking for test storage... 
00:14:30.361 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:30.361 00:48:22 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:30.361 00:48:22 -- nvmf/common.sh@7 -- # uname -s 00:14:30.361 00:48:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:30.361 00:48:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:30.361 00:48:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:30.361 00:48:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:30.361 00:48:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:30.361 00:48:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:30.361 00:48:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:30.361 00:48:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:30.361 00:48:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:30.361 00:48:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:30.361 00:48:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:14:30.361 00:48:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:14:30.361 00:48:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:30.361 00:48:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:30.361 00:48:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:30.361 00:48:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:30.361 00:48:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:30.361 00:48:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:30.361 00:48:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:30.361 00:48:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:30.361 00:48:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.361 00:48:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.361 00:48:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.361 00:48:22 -- paths/export.sh@5 -- # export PATH 00:14:30.361 00:48:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:30.361 00:48:22 -- nvmf/common.sh@47 -- # : 0 00:14:30.361 00:48:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:30.361 00:48:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:30.361 00:48:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:30.361 00:48:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:30.361 00:48:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:30.361 00:48:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:30.361 00:48:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:30.361 00:48:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:30.361 00:48:22 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:30.361 00:48:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:30.361 00:48:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:30.361 00:48:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:30.362 00:48:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:30.362 00:48:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:30.362 00:48:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:30.362 00:48:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:30.362 00:48:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:30.362 00:48:22 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:14:30.362 00:48:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:30.362 00:48:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:30.362 00:48:22 -- common/autotest_common.sh@10 -- # set +x 00:14:36.934 00:48:28 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:36.934 00:48:28 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:36.934 00:48:28 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:36.934 00:48:28 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:36.934 00:48:28 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:36.934 00:48:28 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:36.934 00:48:28 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:36.934 00:48:28 -- nvmf/common.sh@295 -- # net_devs=() 00:14:36.934 00:48:28 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:36.934 00:48:28 -- nvmf/common.sh@296 -- # e810=() 00:14:36.934 00:48:28 -- nvmf/common.sh@296 -- # local -ga e810 00:14:36.934 00:48:28 -- nvmf/common.sh@297 -- 
# x722=() 00:14:36.934 00:48:28 -- nvmf/common.sh@297 -- # local -ga x722 00:14:36.934 00:48:28 -- nvmf/common.sh@298 -- # mlx=() 00:14:36.934 00:48:28 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:36.934 00:48:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:36.934 00:48:28 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:36.934 00:48:28 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:36.934 00:48:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.934 00:48:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:36.934 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:36.934 00:48:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:36.934 00:48:28 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:36.934 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:36.934 00:48:28 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:36.934 00:48:28 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:36.934 00:48:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.934 00:48:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:36.934 00:48:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.934 00:48:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:36.934 Found net devices under 0000:27:00.0: cvl_0_0 00:14:36.934 00:48:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.934 00:48:28 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:14:36.934 00:48:28 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:36.934 00:48:28 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:36.934 00:48:28 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:36.934 00:48:28 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:36.934 Found net devices under 0000:27:00.1: cvl_0_1 00:14:36.934 00:48:28 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:36.934 00:48:28 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:36.934 00:48:28 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:36.934 00:48:28 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:36.934 00:48:28 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:36.934 00:48:28 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:36.934 00:48:28 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:36.934 00:48:28 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:36.934 00:48:28 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:36.934 00:48:28 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:36.934 00:48:28 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:36.934 00:48:28 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:36.934 00:48:28 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:36.934 00:48:28 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:36.934 00:48:28 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:36.934 00:48:28 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:36.934 00:48:28 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:36.934 00:48:28 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:36.934 00:48:28 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:36.934 00:48:28 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:36.934 00:48:28 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:36.934 00:48:28 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:36.934 00:48:28 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:36.934 00:48:28 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:36.934 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:36.934 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms 00:14:36.934 00:14:36.934 --- 10.0.0.2 ping statistics --- 00:14:36.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.934 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms 00:14:36.934 00:48:28 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:36.934 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:36.934 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.169 ms 00:14:36.934 00:14:36.934 --- 10.0.0.1 ping statistics --- 00:14:36.934 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:36.934 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:14:36.934 00:48:28 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:36.934 00:48:28 -- nvmf/common.sh@411 -- # return 0 00:14:36.934 00:48:28 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:36.934 00:48:28 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:36.934 00:48:28 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:36.934 00:48:28 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:36.934 00:48:28 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:36.934 00:48:28 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:36.934 00:48:28 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:36.934 00:48:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:36.934 00:48:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:36.934 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:14:36.934 00:48:28 -- nvmf/common.sh@470 -- # nvmfpid=2701215 00:14:36.934 00:48:28 -- nvmf/common.sh@471 -- # waitforlisten 2701215 00:14:36.934 00:48:28 -- common/autotest_common.sh@817 -- # '[' -z 2701215 ']' 00:14:36.934 00:48:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:36.934 00:48:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:36.934 00:48:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:36.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:36.934 00:48:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:36.934 00:48:28 -- common/autotest_common.sh@10 -- # set +x 00:14:36.934 00:48:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:36.934 [2024-04-27 00:48:28.723983] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:36.934 [2024-04-27 00:48:28.724115] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:36.934 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.934 [2024-04-27 00:48:28.859283] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:36.934 [2024-04-27 00:48:28.951930] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:36.934 [2024-04-27 00:48:28.951972] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:36.934 [2024-04-27 00:48:28.951982] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:36.934 [2024-04-27 00:48:28.951992] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:36.934 [2024-04-27 00:48:28.951999] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
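Condensed for readability, the nvmf_tcp_init sequence traced above amounts to the shell sketch below. It restates the commands the trace shows (the cvl_0_0/cvl_0_1 names are the two ice ports detected earlier); it is not the verbatim nvmf/common.sh source.

  # Loopback NVMe/TCP topology: one port moves into a private network namespace
  # (target side), the other stays in the root namespace (initiator side).
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address, inside the namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP listen port
  ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
  # The target application then runs inside the namespace, as traced above:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3

The two pings (0% packet loss above) are the sanity gate before the harness starts the target and the test proceeds.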
00:14:36.935 [2024-04-27 00:48:28.952184] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.935 [2024-04-27 00:48:28.952211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:36.935 00:48:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:36.935 00:48:29 -- common/autotest_common.sh@850 -- # return 0 00:14:36.935 00:48:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:36.935 00:48:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:36.935 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.935 00:48:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:36.935 00:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.935 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.935 [2024-04-27 00:48:29.449941] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:36.935 00:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:36.935 00:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.935 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.935 00:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:36.935 00:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.935 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.935 [2024-04-27 00:48:29.470226] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:36.935 00:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:36.935 00:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.935 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.935 NULL1 00:14:36.935 00:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:36.935 00:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.935 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.935 Delay0 00:14:36.935 00:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:36.935 00:48:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:36.935 00:48:29 -- common/autotest_common.sh@10 -- # set +x 00:14:36.935 00:48:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@28 -- # perf_pid=2701259 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:36.935 00:48:29 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:36.935 EAL: No free 2048 kB hugepages reported on node 1 00:14:36.935 [2024-04-27 00:48:29.581360] 
subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:14:38.837 00:48:31 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:38.837 00:48:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:14:38.837 00:48:31 -- common/autotest_common.sh@10 -- # set +x 00:14:39.406 [repeated "Read/Write completed with error (sct=0, sc=8)" entries with interspersed "starting I/O failed: -6" elided] 00:14:39.407 [2024-04-27 00:48:31.886462] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010440 is same with the state(5) to be set 00:14:39.407 [repeated completion-error entries elided] 00:14:39.407 [2024-04-27 00:48:31.887166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002440 is same with the state(5) to be set 00:14:39.407 [repeated completion-error entries elided] 00:14:39.407 [2024-04-27 00:48:31.887716] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010040 is same with the state(5) to be set 00:14:39.408 [repeated completion-error entries elided] 00:14:40.355 [2024-04-27 00:48:32.842514] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002240 is same with the state(5) to be set 00:14:40.355 [repeated completion-error entries elided] 00:14:40.355 [2024-04-27 00:48:32.886328] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002640 is same with the state(5) to be set 00:14:40.355 [repeated completion-error entries elided] 00:14:40.355 [2024-04-27 00:48:32.886568] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010640 is same with the state(5) to be set 00:14:40.355 [repeated completion-error entries elided] 00:14:40.355 [2024-04-27 00:48:32.886999] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000002a40 is same with the state(5) to be set 00:14:40.355 [repeated completion-error entries elided] 00:14:40.355 [2024-04-27 00:48:32.888992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000010240 is same with the state(5) to be set 00:14:40.355 [2024-04-27 00:48:32.889860] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000002240 (9): Bad file descriptor 00:14:40.355 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:14:40.355 00:48:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:14:40.355 00:48:32 -- target/delete_subsystem.sh@34 -- # delay=0 00:14:40.355 00:48:32 -- target/delete_subsystem.sh@35 -- # kill -0 2701259 00:14:40.355 00:48:32 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:14:40.355 Initializing NVMe Controllers 00:14:40.355 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:40.355 Controller IO queue size 128, less than required. 00:14:40.355 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:40.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:40.355 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:40.355 Initialization complete. Launching workers.
00:14:40.355 ========================================================
00:14:40.355 Latency(us)
00:14:40.355 Device Information : IOPS MiB/s Average min max
00:14:40.355 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.65 0.09 882013.80 646.28 1013068.57
00:14:40.356 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 169.20 0.08 897384.16 529.61 1013042.39
00:14:40.356 ========================================================
00:14:40.356 Total : 345.85 0.17 889533.59 529.61 1013068.57
00:14:40.356
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:48:33 -- target/delete_subsystem.sh@35 -- # kill -0 2701259
00:14:40.937 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2701259) - No such process
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@45 -- # NOT wait 2701259
00:14:40.937 00:48:33 -- common/autotest_common.sh@638 -- # local es=0
00:14:40.937 00:48:33 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 2701259
00:14:40.937 00:48:33 -- common/autotest_common.sh@626 -- # local arg=wait
00:14:40.937 00:48:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:40.937 00:48:33 -- common/autotest_common.sh@630 -- # type -t wait
00:14:40.937 00:48:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in
00:14:40.937 00:48:33 -- common/autotest_common.sh@641 -- # wait 2701259
00:14:40.937 00:48:33 -- common/autotest_common.sh@641 -- # es=1
00:14:40.937 00:48:33 -- common/autotest_common.sh@649 -- # (( es > 128 ))
00:14:40.937 00:48:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]]
00:14:40.937 00:48:33 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:40.937 00:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:40.937 00:48:33 -- common/autotest_common.sh@10 -- # set +x
00:14:40.937 00:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:40.937 00:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:40.937 00:48:33 -- common/autotest_common.sh@10 -- # set +x
00:14:40.937 [2024-04-27 00:48:33.415897] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:40.937 00:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:14:40.937 00:48:33 -- common/autotest_common.sh@549 -- # xtrace_disable
00:14:40.937 00:48:33 -- common/autotest_common.sh@10 -- # set +x
00:14:40.937 00:48:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@54 -- # perf_pid=2702148
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@56 -- # delay=0
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@57 -- # kill -0 2702148
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:40.937 00:48:33 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:14:40.937 EAL: No free 2048 kB hugepages reported on node 1
00:14:40.937 [2024-04-27 00:48:33.524182] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:14:41.504 00:48:33 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:48:33 -- target/delete_subsystem.sh@57 -- # kill -0 2702148
00:48:33 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:41.762 00:48:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:48:34 -- target/delete_subsystem.sh@57 -- # kill -0 2702148
00:48:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:42.330 00:48:34 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:48:34 -- target/delete_subsystem.sh@57 -- # kill -0 2702148
00:48:34 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:42.899 00:48:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:14:42.900 00:48:35 -- target/delete_subsystem.sh@57 -- # kill -0 2702148
00:48:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:43.467 00:48:35 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:48:35 -- target/delete_subsystem.sh@57 -- # kill -0 2702148
00:48:35 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:44.035 00:48:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:48:36 -- target/delete_subsystem.sh@57 -- # kill -0 2702148
00:48:36 -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:14:44.035 Initializing NVMe Controllers
00:14:44.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:44.035 Controller IO queue size 128, less than required.
00:14:44.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:44.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:44.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:44.035 Initialization complete. Launching workers.
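The repeating "(( delay++ > 20 )) / kill -0 2702148 / sleep 0.5" trace above is delete_subsystem.sh polling for the perf process to exit: kill -0 delivers no signal and only reports through its exit status whether the PID still exists, and the delay counter bounds the wait at 21 polls of 0.5 s each, roughly ten seconds. A minimal standalone sketch of the same liveness-polling pattern (the function name wait_for_pid is illustrative, not taken from the test suite):

  # poll a PID with kill -0 until it exits or the retry budget runs out
  wait_for_pid() {
      local pid=$1 delay=0
      while kill -0 "$pid" 2>/dev/null; do
          (( delay++ > 20 )) && return 1   # timed out after ~10 s
          sleep 0.5
      done
      return 0
  }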
00:14:44.035 ========================================================
00:14:44.035 Latency(us)
00:14:44.035 Device Information : IOPS MiB/s Average min max
00:14:44.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003795.64 1000160.62 1042266.08
00:14:44.036 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004657.63 1000164.89 1011687.24
00:14:44.036 ========================================================
00:14:44.036 Total : 256.00 0.12 1004226.63 1000160.62 1042266.08
00:14:44.036
00:14:44.294 00:48:36 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:48:36 -- target/delete_subsystem.sh@57 -- # kill -0 2702148
00:14:44.294 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2702148) - No such process
00:14:44.294 00:48:36 -- target/delete_subsystem.sh@67 -- # wait 2702148
00:14:44.294 00:48:36 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:14:44.294 00:48:36 -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:14:44.294 00:48:36 -- nvmf/common.sh@477 -- # nvmfcleanup
00:14:44.294 00:48:36 -- nvmf/common.sh@117 -- # sync
00:14:44.294 00:48:36 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:44.294 00:48:36 -- nvmf/common.sh@120 -- # set +e
00:14:44.294 00:48:36 -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:44.294 00:48:36 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:44.294 rmmod nvme_tcp
00:14:44.294 rmmod nvme_fabrics
00:14:44.552 rmmod nvme_keyring
00:14:44.552 00:48:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:44.552 00:48:37 -- nvmf/common.sh@124 -- # set -e
00:14:44.552 00:48:37 -- nvmf/common.sh@125 -- # return 0
00:14:44.552 00:48:37 -- nvmf/common.sh@478 -- # '[' -n 2701215 ']'
00:14:44.552 00:48:37 -- nvmf/common.sh@479 -- # killprocess 2701215
00:14:44.552 00:48:37 -- common/autotest_common.sh@936 -- # '[' -z 2701215 ']'
00:14:44.552 00:48:37 -- common/autotest_common.sh@940 -- # kill -0 2701215
00:14:44.552 00:48:37 -- common/autotest_common.sh@941 -- # uname
00:14:44.552 00:48:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:14:44.552 00:48:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2701215
00:14:44.552 00:48:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:14:44.552 00:48:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:14:44.552 00:48:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2701215'
00:14:44.552 killing process with pid 2701215
00:14:44.552 00:48:37 -- common/autotest_common.sh@955 -- # kill 2701215
00:14:44.552 00:48:37 -- common/autotest_common.sh@960 -- # wait 2701215
00:14:45.118 00:48:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:14:45.118 00:48:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:14:45.118 00:48:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:14:45.118 00:48:37 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:45.118 00:48:37 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:45.118 00:48:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:45.118 00:48:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:45.118 00:48:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:47.023 00:48:39 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:47.023
00:14:47.023 real 0m16.742s
00:14:47.023 user 0m30.991s
00:14:47.023 sys 0m5.018s
00:14:47.023
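The nvmftestfini trace above boils down to a short teardown recipe: sync, unload the initiator-side kernel modules (the first modprobe -v -r nvme-tcp already rmmod's nvme_tcp, nvme_fabrics, and nvme_keyring, so the follow-up nvme-fabrics call is effectively a safeguard), stop the nvmf_tgt process, and dismantle the namespaced test network. A condensed sketch of that sequence, treating $nvmfpid as the target's PID (2701215 in this run); the ip netns delete line is an assumption about what _remove_spdk_ns does, while the other commands mirror the log:

  # teardown sketch modeled on the nvmftestfini trace above
  sync
  modprobe -v -r nvme-tcp        # pulls out nvme_tcp, nvme_fabrics, nvme_keyring
  modprobe -v -r nvme-fabrics    # no-op if the previous call already removed it
  kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null
  ip netns delete cvl_0_0_ns_spdk   # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1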
00:48:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:14:47.023 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:14:47.023 ************************************ 00:14:47.023 END TEST nvmf_delete_subsystem 00:14:47.023 ************************************ 00:14:47.023 00:48:39 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:47.023 00:48:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:47.023 00:48:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:47.023 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:14:47.023 ************************************ 00:14:47.023 START TEST nvmf_ns_masking 00:14:47.023 ************************************ 00:14:47.023 00:48:39 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:47.284 * Looking for test storage... 00:14:47.284 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:14:47.284 00:48:39 -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:14:47.284 00:48:39 -- nvmf/common.sh@7 -- # uname -s 00:14:47.284 00:48:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:47.284 00:48:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:47.284 00:48:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:47.284 00:48:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:47.284 00:48:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:47.284 00:48:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:47.284 00:48:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:47.284 00:48:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:47.284 00:48:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:47.284 00:48:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:47.284 00:48:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:14:47.284 00:48:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:14:47.284 00:48:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:47.284 00:48:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:47.284 00:48:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:14:47.284 00:48:39 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:47.284 00:48:39 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:14:47.284 00:48:39 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:47.284 00:48:39 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:47.284 00:48:39 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:47.284 00:48:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.284 00:48:39 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.284 00:48:39 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.284 00:48:39 -- paths/export.sh@5 -- # export PATH 00:14:47.284 00:48:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:47.284 00:48:39 -- nvmf/common.sh@47 -- # : 0 00:14:47.284 00:48:39 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:47.284 00:48:39 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:47.284 00:48:39 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:47.284 00:48:39 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:47.284 00:48:39 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:47.284 00:48:39 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:47.284 00:48:39 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:47.284 00:48:39 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:47.284 00:48:39 -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:14:47.284 00:48:39 -- target/ns_masking.sh@11 -- # loops=5 00:14:47.284 00:48:39 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:47.284 00:48:39 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:14:47.284 00:48:39 -- target/ns_masking.sh@15 -- # uuidgen 00:14:47.284 00:48:39 -- target/ns_masking.sh@15 -- # HOSTID=cd5a212b-ddce-448d-8024-3943c1cbd7b8 00:14:47.284 00:48:39 -- target/ns_masking.sh@44 -- # nvmftestinit 00:14:47.284 00:48:39 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:14:47.284 00:48:39 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:47.284 00:48:39 -- nvmf/common.sh@437 -- # prepare_net_devs 00:14:47.284 00:48:39 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:14:47.284 00:48:39 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:14:47.284 00:48:39 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:47.284 00:48:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:47.284 00:48:39 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:14:47.284 00:48:39 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:14:47.284 00:48:39 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:14:47.284 00:48:39 -- nvmf/common.sh@285 -- # xtrace_disable 00:14:47.284 00:48:39 -- common/autotest_common.sh@10 -- # set +x 00:14:52.572 00:48:44 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:14:52.572 00:48:44 -- nvmf/common.sh@291 -- # pci_devs=() 00:14:52.572 00:48:44 -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:52.572 00:48:44 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:52.572 00:48:44 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:52.572 00:48:44 -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:52.572 00:48:44 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:52.572 00:48:44 -- nvmf/common.sh@295 -- # net_devs=() 00:14:52.572 00:48:44 -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:52.572 00:48:44 -- nvmf/common.sh@296 -- # e810=() 00:14:52.572 00:48:44 -- nvmf/common.sh@296 -- # local -ga e810 00:14:52.572 00:48:44 -- nvmf/common.sh@297 -- # x722=() 00:14:52.572 00:48:44 -- nvmf/common.sh@297 -- # local -ga x722 00:14:52.572 00:48:44 -- nvmf/common.sh@298 -- # mlx=() 00:14:52.572 00:48:44 -- nvmf/common.sh@298 -- # local -ga mlx 00:14:52.572 00:48:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:52.572 00:48:44 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:52.572 00:48:44 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:52.572 00:48:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.572 00:48:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:14:52.572 Found 0000:27:00.0 (0x8086 - 0x159b) 00:14:52.572 00:48:44 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:52.572 00:48:44 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:14:52.572 Found 0000:27:00.1 (0x8086 - 0x159b) 00:14:52.572 00:48:44 -- nvmf/common.sh@342 -- # [[ ice 
== unknown ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:52.572 00:48:44 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.572 00:48:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.572 00:48:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:52.572 00:48:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.572 00:48:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:14:52.572 Found net devices under 0000:27:00.0: cvl_0_0 00:14:52.572 00:48:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.572 00:48:44 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:52.572 00:48:44 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:52.572 00:48:44 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:14:52.572 00:48:44 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:52.572 00:48:44 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:14:52.572 Found net devices under 0000:27:00.1: cvl_0_1 00:14:52.572 00:48:44 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:14:52.572 00:48:44 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:14:52.572 00:48:44 -- nvmf/common.sh@403 -- # is_hw=yes 00:14:52.572 00:48:44 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:14:52.572 00:48:44 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:14:52.572 00:48:44 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:52.572 00:48:44 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:52.572 00:48:44 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:52.572 00:48:44 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:52.572 00:48:44 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:52.572 00:48:44 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:52.572 00:48:44 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:52.572 00:48:44 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:52.572 00:48:44 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:52.572 00:48:44 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:52.572 00:48:44 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:52.572 00:48:44 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:52.572 00:48:44 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:52.572 00:48:44 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:52.572 00:48:44 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:52.572 00:48:44 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:52.572 00:48:44 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:52.572 00:48:45 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:52.572 00:48:45 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:52.572 00:48:45 -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:52.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:52.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.321 ms 00:14:52.572 00:14:52.572 --- 10.0.0.2 ping statistics --- 00:14:52.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.572 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:14:52.572 00:48:45 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:52.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:52.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:14:52.572 00:14:52.572 --- 10.0.0.1 ping statistics --- 00:14:52.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:52.572 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:14:52.572 00:48:45 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:52.572 00:48:45 -- nvmf/common.sh@411 -- # return 0 00:14:52.572 00:48:45 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:14:52.572 00:48:45 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:52.572 00:48:45 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:14:52.572 00:48:45 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:14:52.572 00:48:45 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:52.572 00:48:45 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:14:52.572 00:48:45 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:14:52.572 00:48:45 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:14:52.572 00:48:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:14:52.572 00:48:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:14:52.572 00:48:45 -- common/autotest_common.sh@10 -- # set +x 00:14:52.572 00:48:45 -- nvmf/common.sh@470 -- # nvmfpid=2706743 00:14:52.572 00:48:45 -- nvmf/common.sh@471 -- # waitforlisten 2706743 00:14:52.572 00:48:45 -- common/autotest_common.sh@817 -- # '[' -z 2706743 ']' 00:14:52.572 00:48:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:52.572 00:48:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:14:52.572 00:48:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:52.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:52.572 00:48:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:14:52.572 00:48:45 -- common/autotest_common.sh@10 -- # set +x 00:14:52.572 00:48:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:52.572 [2024-04-27 00:48:45.150357] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:14:52.572 [2024-04-27 00:48:45.150463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:52.572 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.830 [2024-04-27 00:48:45.279676] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:52.830 [2024-04-27 00:48:45.377481] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:52.830 [2024-04-27 00:48:45.377522] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
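The two ping runs above verify the topology that nvmf_tcp_init assembled just before them: the target port cvl_0_0 (10.0.0.2/24) lives inside the cvl_0_0_ns_spdk network namespace, the initiator port cvl_0_1 (10.0.0.1/24) stays in the root namespace, and an iptables rule admits TCP port 4420, so target and initiator exchange NVMe/TCP over a real link even though they share one host. The same wiring, collected from the trace (interface names follow this machine; any pair of cabled-together ports would do):

  # rebuild the two-namespace NVMe/TCP test topology from the trace above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                             # root ns -> target ns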
00:14:52.830 [2024-04-27 00:48:45.377535] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:52.830 [2024-04-27 00:48:45.377546] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:52.830 [2024-04-27 00:48:45.377554] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:52.830 [2024-04-27 00:48:45.377660] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:14:52.830 [2024-04-27 00:48:45.377691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:14:52.830 [2024-04-27 00:48:45.377792] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:14:52.830 [2024-04-27 00:48:45.377802] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.395 00:48:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:14:53.395 00:48:45 -- common/autotest_common.sh@850 -- # return 0 00:14:53.395 00:48:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:14:53.395 00:48:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:14:53.395 00:48:45 -- common/autotest_common.sh@10 -- # set +x 00:14:53.395 00:48:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:53.395 00:48:45 -- target/ns_masking.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:53.395 [2024-04-27 00:48:45.991311] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:53.395 00:48:46 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:14:53.395 00:48:46 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:14:53.395 00:48:46 -- target/ns_masking.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:53.654 Malloc1 00:14:53.654 00:48:46 -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:53.655 Malloc2 00:14:53.915 00:48:46 -- target/ns_masking.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:53.915 00:48:46 -- target/ns_masking.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:54.173 00:48:46 -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.173 [2024-04-27 00:48:46.808790] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.173 00:48:46 -- target/ns_masking.sh@61 -- # connect 00:14:54.173 00:48:46 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd5a212b-ddce-448d-8024-3943c1cbd7b8 -a 10.0.0.2 -s 4420 -i 4 00:14:54.430 00:48:47 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:14:54.430 00:48:47 -- common/autotest_common.sh@1184 -- # local i=0 00:14:54.430 00:48:47 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.430 00:48:47 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:14:54.430 00:48:47 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:56.333 00:48:49 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:56.333 00:48:49 -- 
common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:56.333 00:48:49 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.333 00:48:49 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:56.333 00:48:49 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.333 00:48:49 -- common/autotest_common.sh@1194 -- # return 0 00:14:56.591 00:48:49 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:56.592 00:48:49 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:56.592 00:48:49 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:56.592 00:48:49 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:56.592 00:48:49 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:14:56.592 00:48:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.592 00:48:49 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:56.592 [ 0]:0x1 00:14:56.592 00:48:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.592 00:48:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.592 00:48:49 -- target/ns_masking.sh@40 -- # nguid=ad3d79abd6c44a4585753829426eb323 00:14:56.592 00:48:49 -- target/ns_masking.sh@41 -- # [[ ad3d79abd6c44a4585753829426eb323 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.592 00:48:49 -- target/ns_masking.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:56.851 00:48:49 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:14:56.851 00:48:49 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:56.851 00:48:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.851 [ 0]:0x1 00:14:56.851 00:48:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:56.851 00:48:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.851 00:48:49 -- target/ns_masking.sh@40 -- # nguid=ad3d79abd6c44a4585753829426eb323 00:14:56.851 00:48:49 -- target/ns_masking.sh@41 -- # [[ ad3d79abd6c44a4585753829426eb323 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.851 00:48:49 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:14:56.851 00:48:49 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:56.851 00:48:49 -- target/ns_masking.sh@39 -- # grep 0x2 00:14:56.851 [ 1]:0x2 00:14:56.851 00:48:49 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:56.851 00:48:49 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:56.851 00:48:49 -- target/ns_masking.sh@40 -- # nguid=06e7ab733393467c9d088d61f2de3293 00:14:56.851 00:48:49 -- target/ns_masking.sh@41 -- # [[ 06e7ab733393467c9d088d61f2de3293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:56.852 00:48:49 -- target/ns_masking.sh@69 -- # disconnect 00:14:56.852 00:48:49 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.110 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.110 00:48:49 -- target/ns_masking.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:57.370 00:48:49 -- target/ns_masking.sh@74 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:57.370 00:48:49 -- target/ns_masking.sh@77 -- # connect 1 00:14:57.370 
00:48:49 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd5a212b-ddce-448d-8024-3943c1cbd7b8 -a 10.0.0.2 -s 4420 -i 4 00:14:57.627 00:48:50 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:57.627 00:48:50 -- common/autotest_common.sh@1184 -- # local i=0 00:14:57.627 00:48:50 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.627 00:48:50 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:14:57.627 00:48:50 -- common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:14:57.627 00:48:50 -- common/autotest_common.sh@1191 -- # sleep 2 00:14:59.531 00:48:52 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:14:59.531 00:48:52 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:14:59.531 00:48:52 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.531 00:48:52 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:14:59.531 00:48:52 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.531 00:48:52 -- common/autotest_common.sh@1194 -- # return 0 00:14:59.531 00:48:52 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:14:59.531 00:48:52 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:59.531 00:48:52 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:14:59.531 00:48:52 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:14:59.531 00:48:52 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:14:59.531 00:48:52 -- common/autotest_common.sh@638 -- # local es=0 00:14:59.531 00:48:52 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:14:59.531 00:48:52 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:14:59.531 00:48:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:59.531 00:48:52 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:14:59.531 00:48:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:14:59.531 00:48:52 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:14:59.531 00:48:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.531 00:48:52 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:59.531 00:48:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.531 00:48:52 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.531 00:48:52 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:14:59.531 00:48:52 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.531 00:48:52 -- common/autotest_common.sh@641 -- # es=1 00:14:59.531 00:48:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:14:59.531 00:48:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:14:59.531 00:48:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:14:59.531 00:48:52 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:14:59.531 00:48:52 -- target/ns_masking.sh@39 -- # grep 0x2 00:14:59.531 00:48:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.531 [ 0]:0x2 00:14:59.532 00:48:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.532 00:48:52 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.532 00:48:52 -- target/ns_masking.sh@40 -- # nguid=06e7ab733393467c9d088d61f2de3293 00:14:59.532 00:48:52 -- 
target/ns_masking.sh@41 -- # [[ 06e7ab733393467c9d088d61f2de3293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.532 00:48:52 -- target/ns_masking.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:59.790 00:48:52 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:14:59.790 00:48:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.790 00:48:52 -- target/ns_masking.sh@39 -- # grep 0x1 00:14:59.790 [ 0]:0x1 00:14:59.790 00:48:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:59.790 00:48:52 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.790 00:48:52 -- target/ns_masking.sh@40 -- # nguid=ad3d79abd6c44a4585753829426eb323 00:14:59.790 00:48:52 -- target/ns_masking.sh@41 -- # [[ ad3d79abd6c44a4585753829426eb323 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.790 00:48:52 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:14:59.790 00:48:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:14:59.790 00:48:52 -- target/ns_masking.sh@39 -- # grep 0x2 00:14:59.790 [ 1]:0x2 00:14:59.790 00:48:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:59.790 00:48:52 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:14:59.790 00:48:52 -- target/ns_masking.sh@40 -- # nguid=06e7ab733393467c9d088d61f2de3293 00:14:59.790 00:48:52 -- target/ns_masking.sh@41 -- # [[ 06e7ab733393467c9d088d61f2de3293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:59.790 00:48:52 -- target/ns_masking.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:00.050 00:48:52 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:15:00.050 00:48:52 -- common/autotest_common.sh@638 -- # local es=0 00:15:00.050 00:48:52 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:00.050 00:48:52 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:00.050 00:48:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.050 00:48:52 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:00.050 00:48:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:00.050 00:48:52 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:00.050 00:48:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.050 00:48:52 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:00.050 00:48:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:00.050 00:48:52 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.050 00:48:52 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:00.050 00:48:52 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.050 00:48:52 -- common/autotest_common.sh@641 -- # es=1 00:15:00.050 00:48:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:00.050 00:48:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:00.050 00:48:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:00.050 00:48:52 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:15:00.050 00:48:52 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:00.050 00:48:52 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:00.050 [ 0]:0x2 00:15:00.050 
00:48:52 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:00.050 00:48:52 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:00.050 00:48:52 -- target/ns_masking.sh@40 -- # nguid=06e7ab733393467c9d088d61f2de3293 00:15:00.050 00:48:52 -- target/ns_masking.sh@41 -- # [[ 06e7ab733393467c9d088d61f2de3293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:00.050 00:48:52 -- target/ns_masking.sh@91 -- # disconnect 00:15:00.050 00:48:52 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.050 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.050 00:48:52 -- target/ns_masking.sh@94 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:00.311 00:48:52 -- target/ns_masking.sh@95 -- # connect 2 00:15:00.311 00:48:52 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I cd5a212b-ddce-448d-8024-3943c1cbd7b8 -a 10.0.0.2 -s 4420 -i 4 00:15:00.569 00:48:53 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:00.569 00:48:53 -- common/autotest_common.sh@1184 -- # local i=0 00:15:00.569 00:48:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.569 00:48:53 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:15:00.569 00:48:53 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:15:00.569 00:48:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:15:02.475 00:48:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:15:02.475 00:48:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:15:02.475 00:48:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.475 00:48:55 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:15:02.475 00:48:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.475 00:48:55 -- common/autotest_common.sh@1194 -- # return 0 00:15:02.475 00:48:55 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:15:02.475 00:48:55 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:02.475 00:48:55 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:15:02.475 00:48:55 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:15:02.475 00:48:55 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:15:02.475 00:48:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:02.475 00:48:55 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:02.475 [ 0]:0x1 00:15:02.475 00:48:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.475 00:48:55 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:02.475 00:48:55 -- target/ns_masking.sh@40 -- # nguid=ad3d79abd6c44a4585753829426eb323 00:15:02.475 00:48:55 -- target/ns_masking.sh@41 -- # [[ ad3d79abd6c44a4585753829426eb323 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.475 00:48:55 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:15:02.475 00:48:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:02.475 00:48:55 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:02.475 [ 1]:0x2 00:15:02.475 00:48:55 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:02.475 00:48:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.734 00:48:55 -- target/ns_masking.sh@40 -- # 
nguid=06e7ab733393467c9d088d61f2de3293 00:15:02.734 00:48:55 -- target/ns_masking.sh@41 -- # [[ 06e7ab733393467c9d088d61f2de3293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.734 00:48:55 -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:02.734 00:48:55 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:15:02.734 00:48:55 -- common/autotest_common.sh@638 -- # local es=0 00:15:02.734 00:48:55 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:02.734 00:48:55 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:02.734 00:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.734 00:48:55 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:02.734 00:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.734 00:48:55 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:02.734 00:48:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:02.735 00:48:55 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:02.735 00:48:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.735 00:48:55 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:02.735 00:48:55 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:02.735 00:48:55 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.735 00:48:55 -- common/autotest_common.sh@641 -- # es=1 00:15:02.735 00:48:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:02.735 00:48:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:02.735 00:48:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:02.735 00:48:55 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:15:02.735 00:48:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:02.735 00:48:55 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:02.735 [ 0]:0x2 00:15:02.735 00:48:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.735 00:48:55 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:02.735 00:48:55 -- target/ns_masking.sh@40 -- # nguid=06e7ab733393467c9d088d61f2de3293 00:15:02.735 00:48:55 -- target/ns_masking.sh@41 -- # [[ 06e7ab733393467c9d088d61f2de3293 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.735 00:48:55 -- target/ns_masking.sh@105 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.735 00:48:55 -- common/autotest_common.sh@638 -- # local es=0 00:15:02.735 00:48:55 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.735 00:48:55 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:02.735 00:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.735 00:48:55 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:02.735 00:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.735 00:48:55 -- common/autotest_common.sh@632 -- # type -P 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:02.735 00:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.735 00:48:55 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:02.735 00:48:55 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:15:02.735 00:48:55 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:02.993 [2024-04-27 00:48:55.533945] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:02.993 request: 00:15:02.993 { 00:15:02.993 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:02.993 "nsid": 2, 00:15:02.993 "host": "nqn.2016-06.io.spdk:host1", 00:15:02.993 "method": "nvmf_ns_remove_host", 00:15:02.993 "req_id": 1 00:15:02.993 } 00:15:02.993 Got JSON-RPC error response 00:15:02.993 response: 00:15:02.993 { 00:15:02.993 "code": -32602, 00:15:02.993 "message": "Invalid parameters" 00:15:02.993 } 00:15:02.993 00:48:55 -- common/autotest_common.sh@641 -- # es=1 00:15:02.993 00:48:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:02.993 00:48:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:02.993 00:48:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:02.993 00:48:55 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:15:02.993 00:48:55 -- common/autotest_common.sh@638 -- # local es=0 00:15:02.993 00:48:55 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:15:02.993 00:48:55 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:15:02.993 00:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.993 00:48:55 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:15:02.993 00:48:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:15:02.993 00:48:55 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:15:02.993 00:48:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:02.993 00:48:55 -- target/ns_masking.sh@39 -- # grep 0x1 00:15:02.993 00:48:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:02.993 00:48:55 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:02.993 00:48:55 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:15:02.993 00:48:55 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:02.993 00:48:55 -- common/autotest_common.sh@641 -- # es=1 00:15:02.993 00:48:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:15:02.993 00:48:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:15:02.993 00:48:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:15:02.993 00:48:55 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:15:02.993 00:48:55 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:15:02.993 00:48:55 -- target/ns_masking.sh@39 -- # grep 0x2 00:15:02.993 [ 0]:0x2 00:15:02.993 00:48:55 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:02.993 00:48:55 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:15:03.252 00:48:55 -- target/ns_masking.sh@40 -- # nguid=06e7ab733393467c9d088d61f2de3293 00:15:03.252 00:48:55 -- target/ns_masking.sh@41 -- # [[ 06e7ab733393467c9d088d61f2de3293 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:03.252 00:48:55 -- target/ns_masking.sh@108 -- # disconnect 00:15:03.252 00:48:55 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:03.252 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:03.252 00:48:55 -- target/ns_masking.sh@110 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:03.512 00:48:55 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:15:03.512 00:48:55 -- target/ns_masking.sh@114 -- # nvmftestfini 00:15:03.512 00:48:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:03.512 00:48:55 -- nvmf/common.sh@117 -- # sync 00:15:03.512 00:48:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:03.512 00:48:55 -- nvmf/common.sh@120 -- # set +e 00:15:03.512 00:48:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:03.512 00:48:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:03.512 rmmod nvme_tcp 00:15:03.512 rmmod nvme_fabrics 00:15:03.512 rmmod nvme_keyring 00:15:03.512 00:48:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:03.512 00:48:56 -- nvmf/common.sh@124 -- # set -e 00:15:03.512 00:48:56 -- nvmf/common.sh@125 -- # return 0 00:15:03.512 00:48:56 -- nvmf/common.sh@478 -- # '[' -n 2706743 ']' 00:15:03.512 00:48:56 -- nvmf/common.sh@479 -- # killprocess 2706743 00:15:03.512 00:48:56 -- common/autotest_common.sh@936 -- # '[' -z 2706743 ']' 00:15:03.512 00:48:56 -- common/autotest_common.sh@940 -- # kill -0 2706743 00:15:03.512 00:48:56 -- common/autotest_common.sh@941 -- # uname 00:15:03.512 00:48:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:03.512 00:48:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2706743 00:15:03.512 00:48:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:03.512 00:48:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:03.512 00:48:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2706743' 00:15:03.512 killing process with pid 2706743 00:15:03.512 00:48:56 -- common/autotest_common.sh@955 -- # kill 2706743 00:15:03.512 00:48:56 -- common/autotest_common.sh@960 -- # wait 2706743 00:15:04.081 00:48:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:04.081 00:48:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:04.081 00:48:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:04.081 00:48:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:04.081 00:48:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:04.081 00:48:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:04.081 00:48:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:04.081 00:48:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.625 00:48:58 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:06.625 00:15:06.625 real 0m19.026s 00:15:06.625 user 0m48.234s 00:15:06.625 sys 0m5.225s 00:15:06.625 00:48:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:06.625 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.625 ************************************ 00:15:06.625 END TEST nvmf_ns_masking 00:15:06.625 ************************************ 00:15:06.625 00:48:58 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:15:06.625 00:48:58 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:15:06.625 00:48:58 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management 
/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:06.625 00:48:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:06.625 00:48:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:06.625 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:06.625 ************************************ 00:15:06.625 START TEST nvmf_host_management 00:15:06.625 ************************************ 00:15:06.625 00:48:58 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:15:06.625 * Looking for test storage... 00:15:06.625 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:06.625 00:48:58 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:06.625 00:48:58 -- nvmf/common.sh@7 -- # uname -s 00:15:06.625 00:48:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:06.625 00:48:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:06.625 00:48:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:06.625 00:48:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:06.625 00:48:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:06.625 00:48:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:06.625 00:48:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:06.625 00:48:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:06.625 00:48:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:06.625 00:48:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:06.625 00:48:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:15:06.625 00:48:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:15:06.625 00:48:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:06.625 00:48:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:06.625 00:48:58 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:06.625 00:48:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:06.625 00:48:58 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:06.625 00:48:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:06.625 00:48:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:06.625 00:48:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:06.625 00:48:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.625 00:48:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.625 00:48:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.625 00:48:58 -- paths/export.sh@5 -- # export PATH 00:15:06.625 00:48:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:06.625 00:48:58 -- nvmf/common.sh@47 -- # : 0 00:15:06.625 00:48:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:06.625 00:48:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:06.625 00:48:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:06.625 00:48:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:06.625 00:48:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:06.625 00:48:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:06.625 00:48:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:06.625 00:48:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:06.625 00:48:58 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:06.625 00:48:58 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:06.625 00:48:58 -- target/host_management.sh@105 -- # nvmftestinit 00:15:06.625 00:48:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:06.625 00:48:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:06.625 00:48:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:06.625 00:48:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:06.625 00:48:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:06.625 00:48:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:06.625 00:48:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:06.625 00:48:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:06.625 00:48:58 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:06.625 00:48:58 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:06.625 00:48:58 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:06.625 00:48:58 -- common/autotest_common.sh@10 -- # set +x 00:15:11.909 00:49:04 -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci 00:15:11.909 00:49:04 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.909 00:49:04 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.909 00:49:04 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.909 00:49:04 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.909 00:49:04 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.909 00:49:04 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.909 00:49:04 -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.909 00:49:04 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.909 00:49:04 -- nvmf/common.sh@296 -- # e810=() 00:15:11.909 00:49:04 -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.909 00:49:04 -- nvmf/common.sh@297 -- # x722=() 00:15:11.909 00:49:04 -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.909 00:49:04 -- nvmf/common.sh@298 -- # mlx=() 00:15:11.909 00:49:04 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.909 00:49:04 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.909 00:49:04 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.909 00:49:04 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.909 00:49:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.909 00:49:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:11.909 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:11.909 00:49:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.909 00:49:04 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:11.909 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:11.909 00:49:04 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@366 -- # 
(( 0 > 0 )) 00:15:11.909 00:49:04 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.909 00:49:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.909 00:49:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:11.909 00:49:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.909 00:49:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:11.909 Found net devices under 0000:27:00.0: cvl_0_0 00:15:11.909 00:49:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.909 00:49:04 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.909 00:49:04 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.909 00:49:04 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:11.909 00:49:04 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.909 00:49:04 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:11.909 Found net devices under 0000:27:00.1: cvl_0_1 00:15:11.909 00:49:04 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.909 00:49:04 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:11.909 00:49:04 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:11.909 00:49:04 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:11.909 00:49:04 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:11.909 00:49:04 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.909 00:49:04 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.909 00:49:04 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.909 00:49:04 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.909 00:49:04 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.909 00:49:04 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.909 00:49:04 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.909 00:49:04 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.909 00:49:04 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.909 00:49:04 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.909 00:49:04 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.909 00:49:04 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.909 00:49:04 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:12.169 00:49:04 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:12.169 00:49:04 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:12.169 00:49:04 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:12.169 00:49:04 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:12.169 00:49:04 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:12.169 00:49:04 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:12.169 00:49:04 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:12.169 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:12.169 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:15:12.169 00:15:12.169 --- 10.0.0.2 ping statistics --- 00:15:12.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.169 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:15:12.169 00:49:04 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:12.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:12.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.073 ms 00:15:12.169 00:15:12.169 --- 10.0.0.1 ping statistics --- 00:15:12.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:12.169 rtt min/avg/max/mdev = 0.073/0.073/0.073/0.000 ms 00:15:12.169 00:49:04 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:12.169 00:49:04 -- nvmf/common.sh@411 -- # return 0 00:15:12.169 00:49:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:12.169 00:49:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:12.169 00:49:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:12.169 00:49:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:12.169 00:49:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:12.169 00:49:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:12.169 00:49:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:12.169 00:49:04 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:15:12.169 00:49:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:12.169 00:49:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:12.169 00:49:04 -- common/autotest_common.sh@10 -- # set +x 00:15:12.169 ************************************ 00:15:12.169 START TEST nvmf_host_management 00:15:12.169 ************************************ 00:15:12.169 00:49:04 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:15:12.169 00:49:04 -- target/host_management.sh@69 -- # starttarget 00:15:12.169 00:49:04 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:15:12.169 00:49:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:12.169 00:49:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:12.169 00:49:04 -- common/autotest_common.sh@10 -- # set +x 00:15:12.430 00:49:04 -- nvmf/common.sh@470 -- # nvmfpid=2713102 00:15:12.430 00:49:04 -- nvmf/common.sh@471 -- # waitforlisten 2713102 00:15:12.430 00:49:04 -- common/autotest_common.sh@817 -- # '[' -z 2713102 ']' 00:15:12.430 00:49:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:12.430 00:49:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:12.430 00:49:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:12.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:12.430 00:49:04 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:15:12.430 00:49:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:12.430 00:49:04 -- common/autotest_common.sh@10 -- # set +x 00:15:12.430 [2024-04-27 00:49:04.967073] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:15:12.430 [2024-04-27 00:49:04.967205] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:12.430 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.430 [2024-04-27 00:49:05.112335] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:12.691 [2024-04-27 00:49:05.207636] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:12.691 [2024-04-27 00:49:05.207682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:12.691 [2024-04-27 00:49:05.207694] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:12.691 [2024-04-27 00:49:05.207704] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:12.691 [2024-04-27 00:49:05.207712] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:12.691 [2024-04-27 00:49:05.207797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:12.691 [2024-04-27 00:49:05.207911] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:12.691 [2024-04-27 00:49:05.208034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:12.691 [2024-04-27 00:49:05.208064] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:15:13.274 00:49:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:13.274 00:49:05 -- common/autotest_common.sh@850 -- # return 0 00:15:13.274 00:49:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:13.274 00:49:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:13.274 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:15:13.274 00:49:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:13.274 00:49:05 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:13.274 00:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:13.274 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:15:13.274 [2024-04-27 00:49:05.722808] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:13.274 00:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:13.274 00:49:05 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:15:13.274 00:49:05 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:13.274 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:15:13.274 00:49:05 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:13.274 00:49:05 -- target/host_management.sh@23 -- # cat 00:15:13.274 00:49:05 -- target/host_management.sh@30 -- # rpc_cmd 00:15:13.274 00:49:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:13.274 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:15:13.274 Malloc0 00:15:13.275 [2024-04-27 00:49:05.804129] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.275 00:49:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:13.275 00:49:05 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:15:13.275 00:49:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:13.275 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:15:13.275 00:49:05 
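The EAL banner above shows nvmf_tgt coming up inside the cvl_0_0_ns_spdk namespace that nvmf_tcp_init prepared a few steps back: the two ports of the ice NIC are split so that cvl_0_0 (10.0.0.2) becomes the target side inside the namespace while cvl_0_1 (10.0.0.1) stays in the default namespace as the initiator. A minimal standalone sketch of the same wiring, reusing the interface names, addresses, and target flags from this run (any other port pair would be wired the same way):

  # move the target-side port into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # address both ends: initiator in the default ns, target inside the ns
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic in, then start the target inside the namespace
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E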
-- target/host_management.sh@73 -- # perfpid=2713432 00:15:13.275 00:49:05 -- target/host_management.sh@74 -- # waitforlisten 2713432 /var/tmp/bdevperf.sock 00:15:13.275 00:49:05 -- common/autotest_common.sh@817 -- # '[' -z 2713432 ']' 00:15:13.275 00:49:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.275 00:49:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:13.275 00:49:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.275 00:49:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:13.275 00:49:05 -- common/autotest_common.sh@10 -- # set +x 00:15:13.275 00:49:05 -- target/host_management.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:15:13.275 00:49:05 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:15:13.275 00:49:05 -- nvmf/common.sh@521 -- # config=() 00:15:13.275 00:49:05 -- nvmf/common.sh@521 -- # local subsystem config 00:15:13.275 00:49:05 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:13.275 00:49:05 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:13.275 { 00:15:13.275 "params": { 00:15:13.275 "name": "Nvme$subsystem", 00:15:13.275 "trtype": "$TEST_TRANSPORT", 00:15:13.275 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:13.275 "adrfam": "ipv4", 00:15:13.275 "trsvcid": "$NVMF_PORT", 00:15:13.275 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:13.275 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:13.275 "hdgst": ${hdgst:-false}, 00:15:13.275 "ddgst": ${ddgst:-false} 00:15:13.275 }, 00:15:13.275 "method": "bdev_nvme_attach_controller" 00:15:13.275 } 00:15:13.275 EOF 00:15:13.275 )") 00:15:13.275 00:49:05 -- nvmf/common.sh@543 -- # cat 00:15:13.275 00:49:05 -- nvmf/common.sh@545 -- # jq . 00:15:13.275 00:49:05 -- nvmf/common.sh@546 -- # IFS=, 00:15:13.275 00:49:05 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:13.275 "params": { 00:15:13.275 "name": "Nvme0", 00:15:13.275 "trtype": "tcp", 00:15:13.275 "traddr": "10.0.0.2", 00:15:13.275 "adrfam": "ipv4", 00:15:13.275 "trsvcid": "4420", 00:15:13.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:13.275 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:13.275 "hdgst": false, 00:15:13.275 "ddgst": false 00:15:13.275 }, 00:15:13.275 "method": "bdev_nvme_attach_controller" 00:15:13.275 }' 00:15:13.275 [2024-04-27 00:49:05.937882] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:13.275 [2024-04-27 00:49:05.938025] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713432 ] 00:15:13.561 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.561 [2024-04-27 00:49:06.066838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:13.561 [2024-04-27 00:49:06.157196] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:13.821 Running I/O for 10 seconds... 
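The --json /dev/fd/63 handed to bdevperf is the config assembled just above by gen_nvmf_target_json, presumably delivered through bash process substitution; the printf trace shows its single bdev_nvme_attach_controller entry pointing at nqn.2016-06.io.spdk:cnode0 on 10.0.0.2:4420 as host0. A hand-run equivalent, as a sketch (config.json is an assumed filename holding that same JSON output):

  # replay the workload from this run: queue depth 64, 64 KiB IOs,
  # verify pattern, 10 seconds, against the config saved to config.json
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json config.json -q 64 -o 65536 -w verify -t 10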
00:15:14.085 00:49:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:14.085 00:49:06 -- common/autotest_common.sh@850 -- # return 0 00:15:14.085 00:49:06 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:15:14.085 00:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.085 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:15:14.085 00:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.085 00:49:06 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:14.085 00:49:06 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:15:14.085 00:49:06 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:15:14.085 00:49:06 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:15:14.085 00:49:06 -- target/host_management.sh@52 -- # local ret=1 00:15:14.085 00:49:06 -- target/host_management.sh@53 -- # local i 00:15:14.085 00:49:06 -- target/host_management.sh@54 -- # (( i = 10 )) 00:15:14.085 00:49:06 -- target/host_management.sh@54 -- # (( i != 0 )) 00:15:14.085 00:49:06 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:15:14.085 00:49:06 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:15:14.085 00:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.085 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:15:14.085 00:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.085 00:49:06 -- target/host_management.sh@55 -- # read_io_count=334 00:15:14.085 00:49:06 -- target/host_management.sh@58 -- # '[' 334 -ge 100 ']' 00:15:14.085 00:49:06 -- target/host_management.sh@59 -- # ret=0 00:15:14.085 00:49:06 -- target/host_management.sh@60 -- # break 00:15:14.085 00:49:06 -- target/host_management.sh@64 -- # return 0 00:15:14.085 00:49:06 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:14.085 00:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.085 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:15:14.085 [2024-04-27 00:49:06.713312] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:15:14.085 [2024-04-27 00:49:06.713380] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:15:14.085 [2024-04-27 00:49:06.713391] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:15:14.085 [2024-04-27 00:49:06.713399] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:15:14.085 [2024-04-27 00:49:06.713407] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:15:14.085 [2024-04-27 00:49:06.713415] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:15:14.085 [2024-04-27 00:49:06.713431] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002480 is same with the state(5) to be set 00:15:14.085 [2024-04-27 00:49:06.713441] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x618000002480 is same with the state(5) to be set 00:15:14.085 [... identical tcp.c:1587:nvmf_tcp_qpair_set_recv_state *ERROR* messages for tqpair=0x618000002480 repeated, truncated ...] 00:15:14.086 [2024-04-27 00:49:06.714004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:49152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.086 [2024-04-27 00:49:06.714058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.086 
[... matching READ / ABORTED - SQ DELETION pairs for cid:1 through cid:62 repeated, lba advancing 128 blocks per command, truncated ...] 00:15:14.087 [2024-04-27 00:49:06.715212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:57216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:15:14.087 [2024-04-27 00:49:06.715223] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:14.087 [2024-04-27 00:49:06.715234] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000007e40 is same with the state(5) to be set 00:15:14.087 [2024-04-27 00:49:06.715386] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007e40 was disconnected and freed. reset controller. 00:15:14.087 [2024-04-27 00:49:06.716317] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:15:14.087 task offset: 49152 on job bdev=Nvme0n1 fails 00:15:14.087 00:15:14.087 Latency(us) 00:15:14.087 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:14.087 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:14.087 Job: Nvme0n1 ended in about 0.21 seconds with error 00:15:14.087 Verification LBA range: start 0x0 length 0x400 00:15:14.087 Nvme0n1 : 0.21 1846.91 115.43 307.82 0.00 28572.56 7657.36 25386.58 00:15:14.087 =================================================================================================================== 00:15:14.087 Total : 1846.91 115.43 307.82 0.00 28572.56 7657.36 25386.58 00:15:14.087 00:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.087 00:49:06 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:15:14.087 00:49:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:15:14.087 00:49:06 -- common/autotest_common.sh@10 -- # set +x 00:15:14.087 [2024-04-27 00:49:06.718855] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:15:14.087 [2024-04-27 00:49:06.718895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:15:14.087 00:49:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:15:14.087 00:49:06 -- target/host_management.sh@87 -- # sleep 1 00:15:14.087 [2024-04-27 00:49:06.768748] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
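This first run fails by design: nvmf_subsystem_remove_host revokes host0's access to cnode0 mid-verify, the target deletes the submission queue, and all 64 outstanding reads come back ABORTED - SQ DELETION, which bdevperf books under Fail/s until nvmf_subsystem_add_host allows the controller reset to succeed. The table's columns also cross-check: bandwidth is IOPS times the 64 KiB IO size.

  # sanity check on the Latency(us) row: 1846.91 IOPS x 65536 B per IO
  awk 'BEGIN { printf "%.2f MiB/s\n", 1846.91 * 65536 / 1048576 }'   # prints 115.43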
00:15:15.480 00:49:07 -- target/host_management.sh@91 -- # kill -9 2713432 00:15:15.480 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2713432) - No such process 00:15:15.480 00:49:07 -- target/host_management.sh@91 -- # true 00:15:15.481 00:49:07 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:15:15.481 00:49:07 -- target/host_management.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:15:15.481 00:49:07 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:15:15.481 00:49:07 -- nvmf/common.sh@521 -- # config=() 00:15:15.481 00:49:07 -- nvmf/common.sh@521 -- # local subsystem config 00:15:15.481 00:49:07 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:15:15.481 00:49:07 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:15:15.481 { 00:15:15.481 "params": { 00:15:15.481 "name": "Nvme$subsystem", 00:15:15.481 "trtype": "$TEST_TRANSPORT", 00:15:15.481 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:15.481 "adrfam": "ipv4", 00:15:15.481 "trsvcid": "$NVMF_PORT", 00:15:15.481 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:15.481 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:15.481 "hdgst": ${hdgst:-false}, 00:15:15.481 "ddgst": ${ddgst:-false} 00:15:15.481 }, 00:15:15.481 "method": "bdev_nvme_attach_controller" 00:15:15.481 } 00:15:15.481 EOF 00:15:15.481 )") 00:15:15.481 00:49:07 -- nvmf/common.sh@543 -- # cat 00:15:15.481 00:49:07 -- nvmf/common.sh@545 -- # jq . 00:15:15.481 00:49:07 -- nvmf/common.sh@546 -- # IFS=, 00:15:15.481 00:49:07 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:15:15.481 "params": { 00:15:15.481 "name": "Nvme0", 00:15:15.481 "trtype": "tcp", 00:15:15.481 "traddr": "10.0.0.2", 00:15:15.481 "adrfam": "ipv4", 00:15:15.481 "trsvcid": "4420", 00:15:15.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:15.481 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:15:15.481 "hdgst": false, 00:15:15.481 "ddgst": false 00:15:15.481 }, 00:15:15.481 "method": "bdev_nvme_attach_controller" 00:15:15.481 }' 00:15:15.481 [2024-04-27 00:49:07.817676] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:15.481 [2024-04-27 00:49:07.817825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2713756 ] 00:15:15.481 EAL: No free 2048 kB hugepages reported on node 1 00:15:15.481 [2024-04-27 00:49:07.948794] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:15.481 [2024-04-27 00:49:08.039440] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:15.743 Running I/O for 1 seconds... 
00:15:16.683 00:15:16.683 Latency(us) 00:15:16.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.683 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:15:16.683 Verification LBA range: start 0x0 length 0x400 00:15:16.683 Nvme0n1 : 1.01 2285.30 142.83 0.00 0.00 27593.79 4932.45 24696.72 00:15:16.683 =================================================================================================================== 00:15:16.683 Total : 2285.30 142.83 0.00 0.00 27593.79 4932.45 24696.72 00:15:17.254 00:49:09 -- target/host_management.sh@102 -- # stoptarget 00:15:17.254 00:49:09 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:15:17.254 00:49:09 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:15:17.254 00:49:09 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:15:17.254 00:49:09 -- target/host_management.sh@40 -- # nvmftestfini 00:15:17.254 00:49:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:17.254 00:49:09 -- nvmf/common.sh@117 -- # sync 00:15:17.254 00:49:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:17.254 00:49:09 -- nvmf/common.sh@120 -- # set +e 00:15:17.254 00:49:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:17.255 00:49:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:17.255 rmmod nvme_tcp 00:15:17.255 rmmod nvme_fabrics 00:15:17.255 rmmod nvme_keyring 00:15:17.255 00:49:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:17.255 00:49:09 -- nvmf/common.sh@124 -- # set -e 00:15:17.255 00:49:09 -- nvmf/common.sh@125 -- # return 0 00:15:17.255 00:49:09 -- nvmf/common.sh@478 -- # '[' -n 2713102 ']' 00:15:17.255 00:49:09 -- nvmf/common.sh@479 -- # killprocess 2713102 00:15:17.255 00:49:09 -- common/autotest_common.sh@936 -- # '[' -z 2713102 ']' 00:15:17.255 00:49:09 -- common/autotest_common.sh@940 -- # kill -0 2713102 00:15:17.255 00:49:09 -- common/autotest_common.sh@941 -- # uname 00:15:17.255 00:49:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:17.255 00:49:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2713102 00:15:17.255 00:49:09 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:15:17.255 00:49:09 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:15:17.255 00:49:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2713102' 00:15:17.255 killing process with pid 2713102 00:15:17.255 00:49:09 -- common/autotest_common.sh@955 -- # kill 2713102 00:15:17.255 00:49:09 -- common/autotest_common.sh@960 -- # wait 2713102 00:15:17.822 [2024-04-27 00:49:10.288136] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:15:17.822 00:49:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:17.822 00:49:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:17.822 00:49:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:17.822 00:49:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:17.822 00:49:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:17.822 00:49:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.822 00:49:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:17.822 00:49:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.729 00:49:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:19.729 
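The successful run above feeds bdevperf its attach configuration through /dev/fd/62. An equivalent sketch that writes the same JSON (printed verbatim in the log) to a file first is shown below; the outer subsystems/bdev wrapper is the standard SPDK JSON config layout and is assumed here, since only the inner bdev_nvme_attach_controller object appears in the trace:

#!/usr/bin/env bash
# Sketch only: drive bdevperf from a config file instead of a process
# substitution. Address, NQNs and workload flags mirror the run above.
cat > /tmp/nvmf_bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
    --json /tmp/nvmf_bdev.json -q 64 -o 65536 -w verify -t 1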
00:15:19.729 real 0m7.543s 00:15:19.729 user 0m22.787s 00:15:19.729 sys 0m1.295s 00:15:19.729 00:49:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.729 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.729 ************************************ 00:15:19.729 END TEST nvmf_host_management 00:15:19.729 ************************************ 00:15:19.989 00:49:12 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:15:19.989 00:15:19.989 real 0m13.557s 00:15:19.989 user 0m24.415s 00:15:19.989 sys 0m5.650s 00:15:19.989 00:49:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:19.989 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.989 ************************************ 00:15:19.989 END TEST nvmf_host_management 00:15:19.989 ************************************ 00:15:19.989 00:49:12 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:19.989 00:49:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:19.989 00:49:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:19.989 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:15:19.989 ************************************ 00:15:19.989 START TEST nvmf_lvol 00:15:19.989 ************************************ 00:15:19.989 00:49:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:15:19.989 * Looking for test storage... 00:15:19.989 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:19.989 00:49:12 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.989 00:49:12 -- nvmf/common.sh@7 -- # uname -s 00:15:19.989 00:49:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.989 00:49:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.989 00:49:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.989 00:49:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.989 00:49:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:19.989 00:49:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:19.989 00:49:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.989 00:49:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:19.989 00:49:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.989 00:49:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:19.989 00:49:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:15:19.989 00:49:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:15:19.989 00:49:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.989 00:49:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:19.989 00:49:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:19.989 00:49:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.989 00:49:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:19.989 00:49:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.989 00:49:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.989 00:49:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.989 00:49:12 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.989 00:49:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.989 00:49:12 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.989 00:49:12 -- paths/export.sh@5 -- # export PATH 00:15:19.989 00:49:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.989 00:49:12 -- nvmf/common.sh@47 -- # : 0 00:15:19.989 00:49:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:19.989 00:49:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:19.989 00:49:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:19.989 00:49:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.989 00:49:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.989 00:49:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:19.989 00:49:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:19.989 00:49:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:19.989 00:49:12 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:19.989 00:49:12 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:19.989 00:49:12 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:15:19.989 00:49:12 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:15:19.989 00:49:12 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:19.989 00:49:12 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:15:19.989 00:49:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:19.989 00:49:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:15:19.989 00:49:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:19.989 00:49:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:19.989 00:49:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:15:19.989 00:49:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:19.989 00:49:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:19.989 00:49:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:19.989 00:49:12 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:19.989 00:49:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:19.989 00:49:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:19.989 00:49:12 -- common/autotest_common.sh@10 -- # set +x 00:15:25.265 00:49:17 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:25.265 00:49:17 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:25.265 00:49:17 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:25.265 00:49:17 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:25.265 00:49:17 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:25.265 00:49:17 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:25.266 00:49:17 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:25.266 00:49:17 -- nvmf/common.sh@295 -- # net_devs=() 00:15:25.266 00:49:17 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:25.266 00:49:17 -- nvmf/common.sh@296 -- # e810=() 00:15:25.266 00:49:17 -- nvmf/common.sh@296 -- # local -ga e810 00:15:25.266 00:49:17 -- nvmf/common.sh@297 -- # x722=() 00:15:25.266 00:49:17 -- nvmf/common.sh@297 -- # local -ga x722 00:15:25.266 00:49:17 -- nvmf/common.sh@298 -- # mlx=() 00:15:25.266 00:49:17 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:25.266 00:49:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:25.266 00:49:17 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:25.266 00:49:17 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:25.266 00:49:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.266 00:49:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:25.266 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:25.266 00:49:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:25.266 00:49:17 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:25.266 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:25.266 00:49:17 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:25.266 00:49:17 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.266 00:49:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.266 00:49:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:25.266 00:49:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.266 00:49:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:25.266 Found net devices under 0000:27:00.0: cvl_0_0 00:15:25.266 00:49:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.266 00:49:17 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:25.266 00:49:17 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:25.266 00:49:17 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:25.266 00:49:17 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:25.266 00:49:17 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:25.266 Found net devices under 0000:27:00.1: cvl_0_1 00:15:25.266 00:49:17 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:25.266 00:49:17 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:25.266 00:49:17 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:25.266 00:49:17 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:25.266 00:49:17 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:25.266 00:49:17 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:25.266 00:49:17 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:25.266 00:49:17 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:25.266 00:49:17 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:25.266 00:49:17 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:25.266 00:49:17 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:25.266 00:49:17 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:25.266 00:49:17 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:25.266 00:49:17 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:25.266 00:49:17 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:25.266 00:49:17 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:25.266 00:49:17 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:25.266 00:49:17 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:25.525 00:49:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:25.526 00:49:18 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:25.526 00:49:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:25.526 00:49:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:25.526 00:49:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:25.526 00:49:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:25.526 00:49:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:25.526 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:25.526 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.544 ms 00:15:25.526 00:15:25.526 --- 10.0.0.2 ping statistics --- 00:15:25.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.526 rtt min/avg/max/mdev = 0.544/0.544/0.544/0.000 ms 00:15:25.526 00:49:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:25.526 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:25.526 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:15:25.526 00:15:25.526 --- 10.0.0.1 ping statistics --- 00:15:25.526 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:25.526 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:15:25.526 00:49:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:25.526 00:49:18 -- nvmf/common.sh@411 -- # return 0 00:15:25.526 00:49:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:25.526 00:49:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:25.526 00:49:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:25.526 00:49:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:25.526 00:49:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:25.526 00:49:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:25.526 00:49:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:25.526 00:49:18 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:15:25.526 00:49:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:25.526 00:49:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:25.526 00:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:25.526 00:49:18 -- nvmf/common.sh@470 -- # nvmfpid=2718105 00:15:25.526 00:49:18 -- nvmf/common.sh@471 -- # waitforlisten 2718105 00:15:25.526 00:49:18 -- common/autotest_common.sh@817 -- # '[' -z 2718105 ']' 00:15:25.526 00:49:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.526 00:49:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:25.526 00:49:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:25.526 00:49:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.526 00:49:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:25.526 00:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:25.785 [2024-04-27 00:49:18.295069] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:15:25.785 [2024-04-27 00:49:18.295193] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.785 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.785 [2024-04-27 00:49:18.437143] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.045 [2024-04-27 00:49:18.536954] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.045 [2024-04-27 00:49:18.536995] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.045 [2024-04-27 00:49:18.537005] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.045 [2024-04-27 00:49:18.537018] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.045 [2024-04-27 00:49:18.537031] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:26.045 [2024-04-27 00:49:18.537167] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:26.045 [2024-04-27 00:49:18.537150] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.045 [2024-04-27 00:49:18.537178] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.304 00:49:18 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:26.304 00:49:18 -- common/autotest_common.sh@850 -- # return 0 00:15:26.304 00:49:18 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:26.304 00:49:18 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:26.304 00:49:18 -- common/autotest_common.sh@10 -- # set +x 00:15:26.562 00:49:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.562 00:49:19 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:26.562 [2024-04-27 00:49:19.142396] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.562 00:49:19 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:26.822 00:49:19 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:15:26.823 00:49:19 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:15:26.823 00:49:19 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:15:26.823 00:49:19 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:15:27.081 00:49:19 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:15:27.341 00:49:19 -- target/nvmf_lvol.sh@29 -- # lvs=fb0f4dd0-59bd-4e93-b8bf-f2c3c07dffc1 00:15:27.341 00:49:19 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fb0f4dd0-59bd-4e93-b8bf-f2c3c07dffc1 lvol 20 00:15:27.341 00:49:19 -- target/nvmf_lvol.sh@32 -- # lvol=3537a451-0f55-4c60-bcaf-48aabc70fb60 00:15:27.341 00:49:19 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:27.601 00:49:20 -- target/nvmf_lvol.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3537a451-0f55-4c60-bcaf-48aabc70fb60 00:15:27.601 00:49:20 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:27.859 [2024-04-27 00:49:20.422417] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:27.859 00:49:20 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:28.117 00:49:20 -- target/nvmf_lvol.sh@42 -- # perf_pid=2718614 00:15:28.117 00:49:20 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:15:28.117 00:49:20 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:15:28.117 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.053 00:49:21 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3537a451-0f55-4c60-bcaf-48aabc70fb60 MY_SNAPSHOT 00:15:29.314 00:49:21 -- target/nvmf_lvol.sh@47 -- # snapshot=86ecc828-2aeb-4285-8616-5f0cb073c03d 00:15:29.314 00:49:21 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3537a451-0f55-4c60-bcaf-48aabc70fb60 30 00:15:29.314 00:49:21 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 86ecc828-2aeb-4285-8616-5f0cb073c03d MY_CLONE 00:15:29.573 00:49:22 -- target/nvmf_lvol.sh@49 -- # clone=1fd0b4b1-daa3-4d2e-b2b1-71953f6e342f 00:15:29.573 00:49:22 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1fd0b4b1-daa3-4d2e-b2b1-71953f6e342f 00:15:30.141 00:49:22 -- target/nvmf_lvol.sh@53 -- # wait 2718614 00:15:38.267 Initializing NVMe Controllers 00:15:38.267 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:38.267 Controller IO queue size 128, less than required. 00:15:38.267 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:38.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:38.267 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:38.267 Initialization complete. Launching workers. 
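The snapshot, resize, clone and inflate calls traced above are the substance of this lvol test. A condensed sketch of the same RPC flow, assuming a running target and using this run's lvol UUID (bdev_lvol_create returns a fresh UUID every run, so the value below only illustrates the shape):

#!/usr/bin/env bash
# Sketch only: snapshot a live lvol, grow the origin, clone the snapshot,
# then inflate the clone so it no longer depends on the snapshot.
RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
LVOL=3537a451-0f55-4c60-bcaf-48aabc70fb60            # UUID from this run's log
SNAP=$($RPC bdev_lvol_snapshot "$LVOL" MY_SNAPSHOT)  # read-only snapshot
$RPC bdev_lvol_resize "$LVOL" 30                     # grow origin to 30 MiB
CLONE=$($RPC bdev_lvol_clone "$SNAP" MY_CLONE)       # thin clone of the snapshot
$RPC bdev_lvol_inflate "$CLONE"                      # allocate all clusters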
00:15:38.267 ======================================================== 00:15:38.267 Latency(us) 00:15:38.267 Device Information : IOPS MiB/s Average min max 00:15:38.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12880.94 50.32 9939.33 1428.69 85744.75 00:15:38.267 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12786.84 49.95 10009.91 2604.40 60968.13 00:15:38.267 ======================================================== 00:15:38.267 Total : 25667.78 100.26 9974.49 1428.69 85744.75 00:15:38.267 00:15:38.267 00:49:30 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:38.525 00:49:31 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3537a451-0f55-4c60-bcaf-48aabc70fb60 00:15:38.786 00:49:31 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fb0f4dd0-59bd-4e93-b8bf-f2c3c07dffc1 00:15:38.786 00:49:31 -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:38.786 00:49:31 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:38.786 00:49:31 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:38.786 00:49:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:15:38.786 00:49:31 -- nvmf/common.sh@117 -- # sync 00:15:38.786 00:49:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:38.786 00:49:31 -- nvmf/common.sh@120 -- # set +e 00:15:38.786 00:49:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:38.786 00:49:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:38.786 rmmod nvme_tcp 00:15:38.786 rmmod nvme_fabrics 00:15:38.786 rmmod nvme_keyring 00:15:38.786 00:49:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:38.786 00:49:31 -- nvmf/common.sh@124 -- # set -e 00:15:38.786 00:49:31 -- nvmf/common.sh@125 -- # return 0 00:15:38.786 00:49:31 -- nvmf/common.sh@478 -- # '[' -n 2718105 ']' 00:15:38.786 00:49:31 -- nvmf/common.sh@479 -- # killprocess 2718105 00:15:38.786 00:49:31 -- common/autotest_common.sh@936 -- # '[' -z 2718105 ']' 00:15:38.786 00:49:31 -- common/autotest_common.sh@940 -- # kill -0 2718105 00:15:38.786 00:49:31 -- common/autotest_common.sh@941 -- # uname 00:15:38.786 00:49:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:38.786 00:49:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2718105 00:15:39.055 00:49:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:39.055 00:49:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:39.055 00:49:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2718105' 00:15:39.055 killing process with pid 2718105 00:15:39.055 00:49:31 -- common/autotest_common.sh@955 -- # kill 2718105 00:15:39.055 00:49:31 -- common/autotest_common.sh@960 -- # wait 2718105 00:15:39.657 00:49:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:15:39.657 00:49:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:15:39.657 00:49:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:15:39.657 00:49:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:39.657 00:49:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:39.657 00:49:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:39.657 00:49:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:39.657 00:49:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.564 
00:49:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:41.564 00:15:41.564 real 0m21.595s 00:15:41.564 user 1m2.789s 00:15:41.564 sys 0m6.416s 00:15:41.564 00:49:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:15:41.564 00:49:34 -- common/autotest_common.sh@10 -- # set +x 00:15:41.564 ************************************ 00:15:41.564 END TEST nvmf_lvol 00:15:41.564 ************************************ 00:15:41.564 00:49:34 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:41.564 00:49:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:41.564 00:49:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:41.564 00:49:34 -- common/autotest_common.sh@10 -- # set +x 00:15:41.823 ************************************ 00:15:41.823 START TEST nvmf_lvs_grow 00:15:41.823 ************************************ 00:15:41.823 00:49:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:41.823 * Looking for test storage... 00:15:41.823 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:15:41.823 00:49:34 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:15:41.823 00:49:34 -- nvmf/common.sh@7 -- # uname -s 00:15:41.823 00:49:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:41.823 00:49:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:41.823 00:49:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:41.823 00:49:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:41.823 00:49:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:41.823 00:49:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:41.823 00:49:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:41.823 00:49:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:41.823 00:49:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:41.823 00:49:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:41.823 00:49:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:15:41.823 00:49:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:15:41.823 00:49:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:41.823 00:49:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:41.823 00:49:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:15:41.823 00:49:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:41.823 00:49:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:15:41.823 00:49:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:41.823 00:49:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:41.823 00:49:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:41.823 00:49:34 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.823 00:49:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.823 00:49:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.823 00:49:34 -- paths/export.sh@5 -- # export PATH 00:15:41.823 00:49:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:41.823 00:49:34 -- nvmf/common.sh@47 -- # : 0 00:15:41.823 00:49:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:41.823 00:49:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:41.823 00:49:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:41.823 00:49:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:41.823 00:49:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:41.823 00:49:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:41.823 00:49:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:41.823 00:49:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:41.823 00:49:34 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:15:41.823 00:49:34 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:41.823 00:49:34 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:15:41.823 00:49:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:15:41.823 00:49:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:41.823 00:49:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:15:41.823 00:49:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:15:41.823 00:49:34 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:15:41.823 00:49:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:41.823 00:49:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:41.823 00:49:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:41.823 00:49:34 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:15:41.823 00:49:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:15:41.823 00:49:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:15:41.823 00:49:34 -- common/autotest_common.sh@10 -- # set +x 00:15:47.160 00:49:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:47.160 00:49:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:15:47.160 00:49:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:47.160 00:49:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:47.160 00:49:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:47.160 00:49:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:47.160 00:49:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:47.160 00:49:39 -- nvmf/common.sh@295 -- # net_devs=() 00:15:47.160 00:49:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:47.160 00:49:39 -- nvmf/common.sh@296 -- # e810=() 00:15:47.160 00:49:39 -- nvmf/common.sh@296 -- # local -ga e810 00:15:47.160 00:49:39 -- nvmf/common.sh@297 -- # x722=() 00:15:47.160 00:49:39 -- nvmf/common.sh@297 -- # local -ga x722 00:15:47.160 00:49:39 -- nvmf/common.sh@298 -- # mlx=() 00:15:47.160 00:49:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:15:47.160 00:49:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:47.160 00:49:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:47.160 00:49:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:47.161 00:49:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:47.161 00:49:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:47.161 00:49:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.161 00:49:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:15:47.161 Found 0000:27:00.0 (0x8086 - 0x159b) 00:15:47.161 00:49:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.161 
00:49:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:47.161 00:49:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:15:47.161 Found 0000:27:00.1 (0x8086 - 0x159b) 00:15:47.161 00:49:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:47.161 00:49:39 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.161 00:49:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.161 00:49:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:47.161 00:49:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.161 00:49:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:15:47.161 Found net devices under 0000:27:00.0: cvl_0_0 00:15:47.161 00:49:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.161 00:49:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:47.161 00:49:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:47.161 00:49:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:15:47.161 00:49:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:47.161 00:49:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:15:47.161 Found net devices under 0000:27:00.1: cvl_0_1 00:15:47.161 00:49:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:15:47.161 00:49:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:15:47.161 00:49:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:15:47.161 00:49:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:15:47.161 00:49:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:15:47.161 00:49:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:47.161 00:49:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:47.161 00:49:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:47.161 00:49:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:47.161 00:49:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:47.161 00:49:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:47.161 00:49:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:47.161 00:49:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:47.161 00:49:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:47.161 00:49:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:47.161 00:49:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:47.161 00:49:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:47.161 00:49:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:47.161 00:49:39 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:47.161 00:49:39 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:47.161 00:49:39 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:47.161 00:49:39 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set cvl_0_0 up 00:15:47.422 00:49:39 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:47.422 00:49:39 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:47.422 00:49:39 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:47.422 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:47.422 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms 00:15:47.422 00:15:47.422 --- 10.0.0.2 ping statistics --- 00:15:47.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.422 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms 00:15:47.422 00:49:39 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:47.422 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:47.422 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.346 ms 00:15:47.422 00:15:47.422 --- 10.0.0.1 ping statistics --- 00:15:47.422 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:47.422 rtt min/avg/max/mdev = 0.346/0.346/0.346/0.000 ms 00:15:47.422 00:49:39 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:47.422 00:49:39 -- nvmf/common.sh@411 -- # return 0 00:15:47.422 00:49:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:15:47.422 00:49:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:47.422 00:49:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:15:47.422 00:49:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:15:47.422 00:49:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:47.422 00:49:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:15:47.422 00:49:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:15:47.422 00:49:39 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:15:47.422 00:49:39 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:15:47.422 00:49:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:47.422 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 00:49:39 -- nvmf/common.sh@470 -- # nvmfpid=2724895 00:15:47.422 00:49:39 -- nvmf/common.sh@471 -- # waitforlisten 2724895 00:15:47.422 00:49:39 -- common/autotest_common.sh@817 -- # '[' -z 2724895 ']' 00:15:47.422 00:49:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.422 00:49:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:47.422 00:49:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.422 00:49:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:47.422 00:49:39 -- common/autotest_common.sh@10 -- # set +x 00:15:47.422 00:49:39 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:47.422 [2024-04-27 00:49:40.034941] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:15:47.422 [2024-04-27 00:49:40.035055] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:47.683 EAL: No free 2048 kB hugepages reported on node 1 00:15:47.683 [2024-04-27 00:49:40.167825] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.683 [2024-04-27 00:49:40.266540] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:47.683 [2024-04-27 00:49:40.266578] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:47.683 [2024-04-27 00:49:40.266588] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:47.683 [2024-04-27 00:49:40.266597] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:47.683 [2024-04-27 00:49:40.266606] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:47.683 [2024-04-27 00:49:40.266634] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:48.251 00:49:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:48.251 00:49:40 -- common/autotest_common.sh@850 -- # return 0 00:15:48.251 00:49:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:15:48.251 00:49:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:15:48.252 00:49:40 -- common/autotest_common.sh@10 -- # set +x 00:15:48.252 00:49:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:48.252 00:49:40 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:48.252 [2024-04-27 00:49:40.875058] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:48.252 00:49:40 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:15:48.252 00:49:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:15:48.252 00:49:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:48.252 00:49:40 -- common/autotest_common.sh@10 -- # set +x 00:15:48.510 ************************************ 00:15:48.510 START TEST lvs_grow_clean 00:15:48.510 ************************************ 00:15:48.510 00:49:40 -- common/autotest_common.sh@1111 -- # lvs_grow 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:48.510 00:49:40 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:48.510 00:49:41 -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:48.510 00:49:41 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:48.770 00:49:41 -- target/nvmf_lvs_grow.sh@28 -- # lvs=4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:15:48.770 00:49:41 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:15:48.770 00:49:41 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:48.770 00:49:41 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:48.770 00:49:41 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:48.770 00:49:41 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 lvol 150 00:15:49.029 00:49:41 -- target/nvmf_lvs_grow.sh@33 -- # lvol=e24d02c4-890a-4174-ae96-0878271680cf 00:15:49.029 00:49:41 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:49.029 00:49:41 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:49.029 [2024-04-27 00:49:41.698003] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:49.029 [2024-04-27 00:49:41.698072] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:49.029 true 00:15:49.029 00:49:41 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:15:49.029 00:49:41 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:49.290 00:49:41 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:49.290 00:49:41 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:49.550 00:49:41 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e24d02c4-890a-4174-ae96-0878271680cf 00:15:49.550 00:49:42 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:49.550 [2024-04-27 00:49:42.230429] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:49.809 00:49:42 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:49.809 00:49:42 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2725258 00:15:49.809 00:49:42 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:49.809 00:49:42 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2725258 /var/tmp/bdevperf.sock 00:15:49.809 00:49:42 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:49.809 00:49:42 -- common/autotest_common.sh@817 -- # '[' -z 2725258 ']' 
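The truncate and bdev_aio_rescan steps traced above set up the grow that this test issues mid-run (the bdev_lvol_grow_lvstore call appears further below). A minimal sketch of that grow path on its own, using this run's aio file path and lvstore UUID:

#!/usr/bin/env bash
# Sketch only: enlarge an AIO-file-backed lvstore in three steps.
RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
AIO_FILE=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev
LVS=4d7785e7-2c81-40b3-ada3-8f9cca823d30   # lvstore UUID from this run
truncate -s 400M "$AIO_FILE"               # grow the backing file 200M -> 400M
$RPC bdev_aio_rescan aio_bdev              # bdev picks up the new block count
$RPC bdev_lvol_grow_lvstore -u "$LVS"      # extend the lvstore onto the new space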
00:15:49.809 00:49:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:49.809 00:49:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:15:49.809 00:49:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:49.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:49.809 00:49:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:15:49.809 00:49:42 -- common/autotest_common.sh@10 -- # set +x 00:15:49.809 [2024-04-27 00:49:42.449506] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:15:49.809 [2024-04-27 00:49:42.449624] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2725258 ] 00:15:50.065 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.065 [2024-04-27 00:49:42.583838] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:50.065 [2024-04-27 00:49:42.722405] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:50.638 00:49:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:15:50.638 00:49:43 -- common/autotest_common.sh@850 -- # return 0 00:15:50.638 00:49:43 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:50.899 Nvme0n1 00:15:50.899 00:49:43 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:50.899 [ 00:15:50.899 { 00:15:50.899 "name": "Nvme0n1", 00:15:50.899 "aliases": [ 00:15:50.899 "e24d02c4-890a-4174-ae96-0878271680cf" 00:15:50.899 ], 00:15:50.899 "product_name": "NVMe disk", 00:15:50.899 "block_size": 4096, 00:15:50.899 "num_blocks": 38912, 00:15:50.899 "uuid": "e24d02c4-890a-4174-ae96-0878271680cf", 00:15:50.899 "assigned_rate_limits": { 00:15:50.899 "rw_ios_per_sec": 0, 00:15:50.899 "rw_mbytes_per_sec": 0, 00:15:50.899 "r_mbytes_per_sec": 0, 00:15:50.899 "w_mbytes_per_sec": 0 00:15:50.899 }, 00:15:50.899 "claimed": false, 00:15:50.899 "zoned": false, 00:15:50.899 "supported_io_types": { 00:15:50.899 "read": true, 00:15:50.899 "write": true, 00:15:50.899 "unmap": true, 00:15:50.899 "write_zeroes": true, 00:15:50.899 "flush": true, 00:15:50.899 "reset": true, 00:15:50.899 "compare": true, 00:15:50.899 "compare_and_write": true, 00:15:50.899 "abort": true, 00:15:50.899 "nvme_admin": true, 00:15:50.899 "nvme_io": true 00:15:50.899 }, 00:15:50.899 "memory_domains": [ 00:15:50.899 { 00:15:50.899 "dma_device_id": "system", 00:15:50.899 "dma_device_type": 1 00:15:50.899 } 00:15:50.899 ], 00:15:50.899 "driver_specific": { 00:15:50.899 "nvme": [ 00:15:50.899 { 00:15:50.899 "trid": { 00:15:50.899 "trtype": "TCP", 00:15:50.899 "adrfam": "IPv4", 00:15:50.899 "traddr": "10.0.0.2", 00:15:50.899 "trsvcid": "4420", 00:15:50.899 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:50.899 }, 00:15:50.899 "ctrlr_data": { 00:15:50.899 "cntlid": 1, 00:15:50.899 "vendor_id": "0x8086", 00:15:50.899 "model_number": "SPDK bdev Controller", 00:15:50.899 "serial_number": "SPDK0", 00:15:50.899 "firmware_revision": "24.05", 00:15:50.899 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:50.899 "oacs": { 
00:15:50.899 "security": 0, 00:15:50.899 "format": 0, 00:15:50.899 "firmware": 0, 00:15:50.899 "ns_manage": 0 00:15:50.899 }, 00:15:50.899 "multi_ctrlr": true, 00:15:50.899 "ana_reporting": false 00:15:50.899 }, 00:15:50.899 "vs": { 00:15:50.899 "nvme_version": "1.3" 00:15:50.899 }, 00:15:50.899 "ns_data": { 00:15:50.899 "id": 1, 00:15:50.899 "can_share": true 00:15:50.899 } 00:15:50.899 } 00:15:50.899 ], 00:15:50.899 "mp_policy": "active_passive" 00:15:50.899 } 00:15:50.899 } 00:15:50.899 ] 00:15:50.899 00:49:43 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2725552 00:15:50.899 00:49:43 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:50.899 00:49:43 -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:50.899 Running I/O for 10 seconds... 00:15:52.278 Latency(us) 00:15:52.278 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:52.278 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:52.278 Nvme0n1 : 1.00 22597.00 88.27 0.00 0.00 0.00 0.00 0.00 00:15:52.278 =================================================================================================================== 00:15:52.278 Total : 22597.00 88.27 0.00 0.00 0.00 0.00 0.00 00:15:52.278 00:15:52.849 00:49:45 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:15:53.107 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.107 Nvme0n1 : 2.00 22840.50 89.22 0.00 0.00 0.00 0.00 0.00 00:15:53.107 =================================================================================================================== 00:15:53.107 Total : 22840.50 89.22 0.00 0.00 0.00 0.00 0.00 00:15:53.107 00:15:53.107 true 00:15:53.107 00:49:45 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:53.107 00:49:45 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:15:53.366 00:49:45 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:53.366 00:49:45 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:53.366 00:49:45 -- target/nvmf_lvs_grow.sh@65 -- # wait 2725552 00:15:53.942 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:53.942 Nvme0n1 : 3.00 22954.00 89.66 0.00 0.00 0.00 0.00 0.00 00:15:53.943 =================================================================================================================== 00:15:53.943 Total : 22954.00 89.66 0.00 0.00 0.00 0.00 0.00 00:15:53.943 00:15:55.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.322 Nvme0n1 : 4.00 22953.25 89.66 0.00 0.00 0.00 0.00 0.00 00:15:55.322 =================================================================================================================== 00:15:55.322 Total : 22953.25 89.66 0.00 0.00 0.00 0.00 0.00 00:15:55.322 00:15:55.888 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:55.888 Nvme0n1 : 5.00 23021.40 89.93 0.00 0.00 0.00 0.00 0.00 00:15:55.888 =================================================================================================================== 00:15:55.888 Total : 23021.40 89.93 0.00 0.00 0.00 0.00 0.00 00:15:55.888 00:15:57.281 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:15:57.281 Nvme0n1 : 6.00 23031.33 89.97 0.00 0.00 0.00 0.00 0.00 00:15:57.281 =================================================================================================================== 00:15:57.281 Total : 23031.33 89.97 0.00 0.00 0.00 0.00 0.00 00:15:57.281 00:15:58.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:58.218 Nvme0n1 : 7.00 23044.00 90.02 0.00 0.00 0.00 0.00 0.00 00:15:58.218 =================================================================================================================== 00:15:58.218 Total : 23044.00 90.02 0.00 0.00 0.00 0.00 0.00 00:15:58.218 00:15:59.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:59.153 Nvme0n1 : 8.00 23047.50 90.03 0.00 0.00 0.00 0.00 0.00 00:15:59.153 =================================================================================================================== 00:15:59.153 Total : 23047.50 90.03 0.00 0.00 0.00 0.00 0.00 00:15:59.153 00:16:00.090 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:00.090 Nvme0n1 : 9.00 23062.78 90.09 0.00 0.00 0.00 0.00 0.00 00:16:00.090 =================================================================================================================== 00:16:00.090 Total : 23062.78 90.09 0.00 0.00 0.00 0.00 0.00 00:16:00.090 00:16:01.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.061 Nvme0n1 : 10.00 23076.70 90.14 0.00 0.00 0.00 0.00 0.00 00:16:01.061 =================================================================================================================== 00:16:01.061 Total : 23076.70 90.14 0.00 0.00 0.00 0.00 0.00 00:16:01.061 00:16:01.061 00:16:01.061 Latency(us) 00:16:01.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.061 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:01.062 Nvme0n1 : 10.00 23066.71 90.10 0.00 0.00 5544.82 1741.88 12210.39 00:16:01.062 =================================================================================================================== 00:16:01.062 Total : 23066.71 90.10 0.00 0.00 5544.82 1741.88 12210.39 00:16:01.062 0 00:16:01.062 00:49:53 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2725258 00:16:01.062 00:49:53 -- common/autotest_common.sh@936 -- # '[' -z 2725258 ']' 00:16:01.062 00:49:53 -- common/autotest_common.sh@940 -- # kill -0 2725258 00:16:01.062 00:49:53 -- common/autotest_common.sh@941 -- # uname 00:16:01.062 00:49:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.062 00:49:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2725258 00:16:01.062 00:49:53 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:01.062 00:49:53 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:01.062 00:49:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2725258' 00:16:01.062 killing process with pid 2725258 00:16:01.062 00:49:53 -- common/autotest_common.sh@955 -- # kill 2725258 00:16:01.062 Received shutdown signal, test time was about 10.000000 seconds 00:16:01.062 00:16:01.062 Latency(us) 00:16:01.062 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:01.062 =================================================================================================================== 00:16:01.062 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:01.062 00:49:53 -- common/autotest_common.sh@960 -- # wait 2725258 
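[Annotation: the grow that the second-by-second table above shows landing two seconds into the ten-second bdevperf run is a single RPC; a sketch, continuing the same illustrative names:

    rpc.py bdev_lvol_grow_lvstore -u "$lvs"      # claim the new clusters under I/O
    rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].total_data_clusters'       # 49 -> 99, with writes still flowing
    # 99 data clusters minus the 38 backing the 150 MiB lvol leaves the
    # free_clusters == 61 that the teardown below goes on to verify.
]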
00:16:01.322 00:49:54 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:01.581 00:49:54 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:16:01.581 00:49:54 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:01.839 00:49:54 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:01.839 00:49:54 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:16:01.839 00:49:54 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:01.839 [2024-04-27 00:49:54.435815] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:01.839 00:49:54 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:16:01.839 00:49:54 -- common/autotest_common.sh@638 -- # local es=0 00:16:01.839 00:49:54 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:16:01.839 00:49:54 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:01.839 00:49:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:01.839 00:49:54 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:01.839 00:49:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:01.839 00:49:54 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:01.839 00:49:54 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:01.839 00:49:54 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:01.839 00:49:54 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:16:01.839 00:49:54 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:16:02.097 request: 00:16:02.097 { 00:16:02.097 "uuid": "4d7785e7-2c81-40b3-ada3-8f9cca823d30", 00:16:02.097 "method": "bdev_lvol_get_lvstores", 00:16:02.097 "req_id": 1 00:16:02.097 } 00:16:02.097 Got JSON-RPC error response 00:16:02.097 response: 00:16:02.097 { 00:16:02.097 "code": -19, 00:16:02.097 "message": "No such device" 00:16:02.097 } 00:16:02.097 00:49:54 -- common/autotest_common.sh@641 -- # es=1 00:16:02.097 00:49:54 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:02.097 00:49:54 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:02.097 00:49:54 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:02.097 00:49:54 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:02.097 aio_bdev 00:16:02.097 00:49:54 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev e24d02c4-890a-4174-ae96-0878271680cf 00:16:02.097 00:49:54 -- common/autotest_common.sh@885 -- # local bdev_name=e24d02c4-890a-4174-ae96-0878271680cf 00:16:02.097 00:49:54 -- 
common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:02.097 00:49:54 -- common/autotest_common.sh@887 -- # local i 00:16:02.097 00:49:54 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:02.097 00:49:54 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:02.097 00:49:54 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:02.355 00:49:54 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e24d02c4-890a-4174-ae96-0878271680cf -t 2000 00:16:02.355 [ 00:16:02.355 { 00:16:02.355 "name": "e24d02c4-890a-4174-ae96-0878271680cf", 00:16:02.355 "aliases": [ 00:16:02.355 "lvs/lvol" 00:16:02.355 ], 00:16:02.355 "product_name": "Logical Volume", 00:16:02.355 "block_size": 4096, 00:16:02.355 "num_blocks": 38912, 00:16:02.355 "uuid": "e24d02c4-890a-4174-ae96-0878271680cf", 00:16:02.355 "assigned_rate_limits": { 00:16:02.355 "rw_ios_per_sec": 0, 00:16:02.355 "rw_mbytes_per_sec": 0, 00:16:02.355 "r_mbytes_per_sec": 0, 00:16:02.355 "w_mbytes_per_sec": 0 00:16:02.355 }, 00:16:02.355 "claimed": false, 00:16:02.356 "zoned": false, 00:16:02.356 "supported_io_types": { 00:16:02.356 "read": true, 00:16:02.356 "write": true, 00:16:02.356 "unmap": true, 00:16:02.356 "write_zeroes": true, 00:16:02.356 "flush": false, 00:16:02.356 "reset": true, 00:16:02.356 "compare": false, 00:16:02.356 "compare_and_write": false, 00:16:02.356 "abort": false, 00:16:02.356 "nvme_admin": false, 00:16:02.356 "nvme_io": false 00:16:02.356 }, 00:16:02.356 "driver_specific": { 00:16:02.356 "lvol": { 00:16:02.356 "lvol_store_uuid": "4d7785e7-2c81-40b3-ada3-8f9cca823d30", 00:16:02.356 "base_bdev": "aio_bdev", 00:16:02.356 "thin_provision": false, 00:16:02.356 "snapshot": false, 00:16:02.356 "clone": false, 00:16:02.356 "esnap_clone": false 00:16:02.356 } 00:16:02.356 } 00:16:02.356 } 00:16:02.356 ] 00:16:02.356 00:49:55 -- common/autotest_common.sh@893 -- # return 0 00:16:02.356 00:49:55 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:16:02.356 00:49:55 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:02.614 00:49:55 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:02.614 00:49:55 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:16:02.614 00:49:55 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:02.873 00:49:55 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:02.873 00:49:55 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e24d02c4-890a-4174-ae96-0878271680cf 00:16:02.873 00:49:55 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4d7785e7-2c81-40b3-ada3-8f9cca823d30 00:16:03.133 00:49:55 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:03.133 00:49:55 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:03.133 00:16:03.133 real 0m14.791s 00:16:03.133 user 0m14.426s 00:16:03.133 sys 0m1.179s 00:16:03.133 00:49:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:03.133 00:49:55 -- 
common/autotest_common.sh@10 -- # set +x 00:16:03.133 ************************************ 00:16:03.133 END TEST lvs_grow_clean 00:16:03.133 ************************************ 00:16:03.133 00:49:55 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:16:03.133 00:49:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:03.133 00:49:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:03.133 00:49:55 -- common/autotest_common.sh@10 -- # set +x 00:16:03.391 ************************************ 00:16:03.391 START TEST lvs_grow_dirty 00:16:03.391 ************************************ 00:16:03.391 00:49:55 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:03.391 00:49:55 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:03.648 00:49:56 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:16:03.648 00:49:56 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:16:03.648 00:49:56 -- target/nvmf_lvs_grow.sh@28 -- # lvs=89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:03.648 00:49:56 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:03.648 00:49:56 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:16:03.906 00:49:56 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:16:03.906 00:49:56 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:16:03.906 00:49:56 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 89f957ec-971a-4c5f-a8f9-b62c5904801c lvol 150 00:16:03.906 00:49:56 -- target/nvmf_lvs_grow.sh@33 -- # lvol=506c7304-9c3a-42eb-ba6b-308bc83dcbd5 00:16:03.906 00:49:56 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:03.906 00:49:56 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:16:04.164 [2024-04-27 00:49:56.629043] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:16:04.164 [2024-04-27 00:49:56.629108] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:16:04.164 true 00:16:04.164 00:49:56 -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:04.164 00:49:56 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:16:04.164 00:49:56 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:16:04.164 00:49:56 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:04.425 00:49:56 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 506c7304-9c3a-42eb-ba6b-308bc83dcbd5 00:16:04.425 00:49:57 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:16:04.686 00:49:57 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:04.686 00:49:57 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2728305 00:16:04.686 00:49:57 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:04.686 00:49:57 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2728305 /var/tmp/bdevperf.sock 00:16:04.686 00:49:57 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:16:04.686 00:49:57 -- common/autotest_common.sh@817 -- # '[' -z 2728305 ']' 00:16:04.686 00:49:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:04.686 00:49:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:04.686 00:49:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:04.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:04.686 00:49:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:04.686 00:49:57 -- common/autotest_common.sh@10 -- # set +x 00:16:04.945 [2024-04-27 00:49:57.389853] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
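[Annotation: bdevperf is launched with -z, so it starts idle and waits to be told what to attach; the harness then drives it entirely over its private RPC socket. A sketch of that handshake, assuming the SPDK tree's bdevperf binary and bdevperf.py helper (paths relative to the spdk checkout, as the absolute paths in this log confirm):

    # Idle bdevperf: randwrite, qd 128, 4 KiB I/O, 10 s, stats every 1 s (-S 1).
    build/examples/bdevperf -r /var/tmp/bdevperf.sock \
        -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
    # Attach the NVMe/TCP controller exported by the target namespace.
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # Kick off the configured workload.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
]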
00:16:04.945 [2024-04-27 00:49:57.389976] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2728305 ] 00:16:04.945 EAL: No free 2048 kB hugepages reported on node 1 00:16:04.945 [2024-04-27 00:49:57.505523] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:04.945 [2024-04-27 00:49:57.595909] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:05.510 00:49:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:05.510 00:49:58 -- common/autotest_common.sh@850 -- # return 0 00:16:05.510 00:49:58 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:16:05.769 Nvme0n1 00:16:05.769 00:49:58 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:16:06.030 [ 00:16:06.030 { 00:16:06.030 "name": "Nvme0n1", 00:16:06.030 "aliases": [ 00:16:06.030 "506c7304-9c3a-42eb-ba6b-308bc83dcbd5" 00:16:06.030 ], 00:16:06.030 "product_name": "NVMe disk", 00:16:06.030 "block_size": 4096, 00:16:06.030 "num_blocks": 38912, 00:16:06.030 "uuid": "506c7304-9c3a-42eb-ba6b-308bc83dcbd5", 00:16:06.030 "assigned_rate_limits": { 00:16:06.030 "rw_ios_per_sec": 0, 00:16:06.030 "rw_mbytes_per_sec": 0, 00:16:06.030 "r_mbytes_per_sec": 0, 00:16:06.030 "w_mbytes_per_sec": 0 00:16:06.030 }, 00:16:06.030 "claimed": false, 00:16:06.030 "zoned": false, 00:16:06.030 "supported_io_types": { 00:16:06.030 "read": true, 00:16:06.030 "write": true, 00:16:06.030 "unmap": true, 00:16:06.030 "write_zeroes": true, 00:16:06.030 "flush": true, 00:16:06.030 "reset": true, 00:16:06.030 "compare": true, 00:16:06.030 "compare_and_write": true, 00:16:06.030 "abort": true, 00:16:06.030 "nvme_admin": true, 00:16:06.030 "nvme_io": true 00:16:06.030 }, 00:16:06.030 "memory_domains": [ 00:16:06.030 { 00:16:06.030 "dma_device_id": "system", 00:16:06.030 "dma_device_type": 1 00:16:06.030 } 00:16:06.030 ], 00:16:06.030 "driver_specific": { 00:16:06.030 "nvme": [ 00:16:06.030 { 00:16:06.030 "trid": { 00:16:06.030 "trtype": "TCP", 00:16:06.030 "adrfam": "IPv4", 00:16:06.030 "traddr": "10.0.0.2", 00:16:06.030 "trsvcid": "4420", 00:16:06.030 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:16:06.030 }, 00:16:06.030 "ctrlr_data": { 00:16:06.030 "cntlid": 1, 00:16:06.030 "vendor_id": "0x8086", 00:16:06.030 "model_number": "SPDK bdev Controller", 00:16:06.030 "serial_number": "SPDK0", 00:16:06.030 "firmware_revision": "24.05", 00:16:06.030 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:06.030 "oacs": { 00:16:06.030 "security": 0, 00:16:06.030 "format": 0, 00:16:06.030 "firmware": 0, 00:16:06.030 "ns_manage": 0 00:16:06.030 }, 00:16:06.030 "multi_ctrlr": true, 00:16:06.030 "ana_reporting": false 00:16:06.030 }, 00:16:06.030 "vs": { 00:16:06.030 "nvme_version": "1.3" 00:16:06.030 }, 00:16:06.030 "ns_data": { 00:16:06.030 "id": 1, 00:16:06.030 "can_share": true 00:16:06.030 } 00:16:06.030 } 00:16:06.030 ], 00:16:06.030 "mp_policy": "active_passive" 00:16:06.030 } 00:16:06.030 } 00:16:06.030 ] 00:16:06.030 00:49:58 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2728605 00:16:06.030 00:49:58 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:16:06.030 00:49:58 -- target/nvmf_lvs_grow.sh@55 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:06.030 Running I/O for 10 seconds... 00:16:07.410 Latency(us) 00:16:07.410 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:07.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:07.410 Nvme0n1 : 1.00 22799.00 89.06 0.00 0.00 0.00 0.00 0.00 00:16:07.410 =================================================================================================================== 00:16:07.410 Total : 22799.00 89.06 0.00 0.00 0.00 0.00 0.00 00:16:07.410 00:16:07.980 00:50:00 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:08.240 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:08.240 Nvme0n1 : 2.00 22964.00 89.70 0.00 0.00 0.00 0.00 0.00 00:16:08.240 =================================================================================================================== 00:16:08.240 Total : 22964.00 89.70 0.00 0.00 0.00 0.00 0.00 00:16:08.240 00:16:08.240 true 00:16:08.240 00:50:00 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:08.240 00:50:00 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:16:08.240 00:50:00 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:16:08.240 00:50:00 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:16:08.240 00:50:00 -- target/nvmf_lvs_grow.sh@65 -- # wait 2728605 00:16:09.175 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:09.175 Nvme0n1 : 3.00 22958.33 89.68 0.00 0.00 0.00 0.00 0.00 00:16:09.175 =================================================================================================================== 00:16:09.175 Total : 22958.33 89.68 0.00 0.00 0.00 0.00 0.00 00:16:09.175 00:16:10.119 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:10.119 Nvme0n1 : 4.00 23011.00 89.89 0.00 0.00 0.00 0.00 0.00 00:16:10.119 =================================================================================================================== 00:16:10.119 Total : 23011.00 89.89 0.00 0.00 0.00 0.00 0.00 00:16:10.119 00:16:11.053 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:11.053 Nvme0n1 : 5.00 23032.20 89.97 0.00 0.00 0.00 0.00 0.00 00:16:11.053 =================================================================================================================== 00:16:11.053 Total : 23032.20 89.97 0.00 0.00 0.00 0.00 0.00 00:16:11.053 00:16:12.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:12.004 Nvme0n1 : 6.00 23045.83 90.02 0.00 0.00 0.00 0.00 0.00 00:16:12.004 =================================================================================================================== 00:16:12.004 Total : 23045.83 90.02 0.00 0.00 0.00 0.00 0.00 00:16:12.004 00:16:13.047 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:13.047 Nvme0n1 : 7.00 23079.29 90.15 0.00 0.00 0.00 0.00 0.00 00:16:13.047 =================================================================================================================== 00:16:13.047 Total : 23079.29 90.15 0.00 0.00 0.00 0.00 0.00 00:16:13.047 00:16:13.982 Job: Nvme0n1 (Core Mask 0x2, workload: 
randwrite, depth: 128, IO size: 4096) 00:16:13.982 Nvme0n1 : 8.00 23082.38 90.17 0.00 0.00 0.00 0.00 0.00 00:16:13.982 =================================================================================================================== 00:16:13.982 Total : 23082.38 90.17 0.00 0.00 0.00 0.00 0.00 00:16:13.982 00:16:15.361 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:15.361 Nvme0n1 : 9.00 23075.22 90.14 0.00 0.00 0.00 0.00 0.00 00:16:15.361 =================================================================================================================== 00:16:15.361 Total : 23075.22 90.14 0.00 0.00 0.00 0.00 0.00 00:16:15.361 00:16:16.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:16.298 Nvme0n1 : 10.00 23098.20 90.23 0.00 0.00 0.00 0.00 0.00 00:16:16.298 =================================================================================================================== 00:16:16.298 Total : 23098.20 90.23 0.00 0.00 0.00 0.00 0.00 00:16:16.298 00:16:16.298 00:16:16.298 Latency(us) 00:16:16.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.298 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:16:16.298 Nvme0n1 : 10.00 23093.64 90.21 0.00 0.00 5539.39 1621.15 12141.41 00:16:16.298 =================================================================================================================== 00:16:16.298 Total : 23093.64 90.21 0.00 0.00 5539.39 1621.15 12141.41 00:16:16.298 0 00:16:16.298 00:50:08 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2728305 00:16:16.298 00:50:08 -- common/autotest_common.sh@936 -- # '[' -z 2728305 ']' 00:16:16.298 00:50:08 -- common/autotest_common.sh@940 -- # kill -0 2728305 00:16:16.298 00:50:08 -- common/autotest_common.sh@941 -- # uname 00:16:16.298 00:50:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:16.298 00:50:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2728305 00:16:16.298 00:50:08 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:16.298 00:50:08 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:16.298 00:50:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2728305' 00:16:16.298 killing process with pid 2728305 00:16:16.298 00:50:08 -- common/autotest_common.sh@955 -- # kill 2728305 00:16:16.298 Received shutdown signal, test time was about 10.000000 seconds 00:16:16.298 00:16:16.298 Latency(us) 00:16:16.298 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:16.298 =================================================================================================================== 00:16:16.298 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:16.298 00:50:08 -- common/autotest_common.sh@960 -- # wait 2728305 00:16:16.557 00:50:09 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:16.817 00:50:09 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:16.817 00:50:09 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:16:16.817 00:50:09 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:16:16.817 00:50:09 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:16:16.817 00:50:09 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 2724895 00:16:16.817 00:50:09 -- 
target/nvmf_lvs_grow.sh@74 -- # wait 2724895 00:16:16.817 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 2724895 Killed "${NVMF_APP[@]}" "$@" 00:16:16.817 00:50:09 -- target/nvmf_lvs_grow.sh@74 -- # true 00:16:16.817 00:50:09 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:16:16.817 00:50:09 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:16.817 00:50:09 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:16.817 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:16:16.817 00:50:09 -- nvmf/common.sh@470 -- # nvmfpid=2730703 00:16:16.817 00:50:09 -- nvmf/common.sh@471 -- # waitforlisten 2730703 00:16:16.817 00:50:09 -- common/autotest_common.sh@817 -- # '[' -z 2730703 ']' 00:16:16.817 00:50:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.817 00:50:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:16.817 00:50:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:16.817 00:50:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:16.817 00:50:09 -- common/autotest_common.sh@10 -- # set +x 00:16:16.817 00:50:09 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:17.076 [2024-04-27 00:50:09.588620] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:17.076 [2024-04-27 00:50:09.588728] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:17.076 EAL: No free 2048 kB hugepages reported on node 1 00:16:17.076 [2024-04-27 00:50:09.712687] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:17.333 [2024-04-27 00:50:09.804935] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:17.333 [2024-04-27 00:50:09.804968] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:17.333 [2024-04-27 00:50:09.804978] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:17.333 [2024-04-27 00:50:09.804987] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:17.333 [2024-04-27 00:50:09.804994] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
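[Annotation: what makes the dirty variant dirty is visible just above: the target is SIGKILLed with 61 free clusters outstanding, so the blobstore is never unloaded cleanly. On restart, re-creating the AIO bdev forces the bs_recover scan whose NOTICE lines appear below. A sketch, with illustrative pid/path variables:

    kill -9 "$nvmfpid"                                   # no clean blobstore unload
    # ...restart nvmf_tgt and its tcp transport, then re-attach the file:
    rpc.py bdev_aio_create /tmp/aio_file aio_bdev 4096   # triggers bs_recover replay
    rpc.py bdev_lvol_get_lvstores -u "$lvs" \
        | jq -r '.[0].free_clusters'                     # recovery preserves the 61

The "Recover: blob 0x0 / 0x1" notices below are the blobstore rebuilding state from per-blob metadata rather than a clean superblock.]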
00:16:17.333 [2024-04-27 00:50:09.805025] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.592 00:50:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:17.592 00:50:10 -- common/autotest_common.sh@850 -- # return 0 00:16:17.592 00:50:10 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:17.592 00:50:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:17.592 00:50:10 -- common/autotest_common.sh@10 -- # set +x 00:16:17.851 00:50:10 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:17.851 00:50:10 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:17.852 [2024-04-27 00:50:10.436465] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:16:17.852 [2024-04-27 00:50:10.436616] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:16:17.852 [2024-04-27 00:50:10.436651] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:16:17.852 00:50:10 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:16:17.852 00:50:10 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 506c7304-9c3a-42eb-ba6b-308bc83dcbd5 00:16:17.852 00:50:10 -- common/autotest_common.sh@885 -- # local bdev_name=506c7304-9c3a-42eb-ba6b-308bc83dcbd5 00:16:17.852 00:50:10 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:17.852 00:50:10 -- common/autotest_common.sh@887 -- # local i 00:16:17.852 00:50:10 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:17.852 00:50:10 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:17.852 00:50:10 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:18.111 00:50:10 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 506c7304-9c3a-42eb-ba6b-308bc83dcbd5 -t 2000 00:16:18.111 [ 00:16:18.111 { 00:16:18.111 "name": "506c7304-9c3a-42eb-ba6b-308bc83dcbd5", 00:16:18.111 "aliases": [ 00:16:18.111 "lvs/lvol" 00:16:18.111 ], 00:16:18.111 "product_name": "Logical Volume", 00:16:18.111 "block_size": 4096, 00:16:18.111 "num_blocks": 38912, 00:16:18.111 "uuid": "506c7304-9c3a-42eb-ba6b-308bc83dcbd5", 00:16:18.111 "assigned_rate_limits": { 00:16:18.111 "rw_ios_per_sec": 0, 00:16:18.111 "rw_mbytes_per_sec": 0, 00:16:18.111 "r_mbytes_per_sec": 0, 00:16:18.111 "w_mbytes_per_sec": 0 00:16:18.111 }, 00:16:18.111 "claimed": false, 00:16:18.111 "zoned": false, 00:16:18.111 "supported_io_types": { 00:16:18.111 "read": true, 00:16:18.111 "write": true, 00:16:18.111 "unmap": true, 00:16:18.111 "write_zeroes": true, 00:16:18.111 "flush": false, 00:16:18.111 "reset": true, 00:16:18.111 "compare": false, 00:16:18.111 "compare_and_write": false, 00:16:18.111 "abort": false, 00:16:18.111 "nvme_admin": false, 00:16:18.111 "nvme_io": false 00:16:18.111 }, 00:16:18.111 "driver_specific": { 00:16:18.111 "lvol": { 00:16:18.111 "lvol_store_uuid": "89f957ec-971a-4c5f-a8f9-b62c5904801c", 00:16:18.111 "base_bdev": "aio_bdev", 00:16:18.111 "thin_provision": false, 00:16:18.111 "snapshot": false, 00:16:18.111 "clone": false, 00:16:18.111 "esnap_clone": false 00:16:18.111 } 00:16:18.111 } 00:16:18.111 } 00:16:18.111 ] 00:16:18.111 00:50:10 -- common/autotest_common.sh@893 -- # return 0 00:16:18.111 00:50:10 -- target/nvmf_lvs_grow.sh@78 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:18.111 00:50:10 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:16:18.371 00:50:10 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:16:18.371 00:50:10 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:18.371 00:50:10 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:16:18.371 00:50:11 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:16:18.371 00:50:11 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:18.630 [2024-04-27 00:50:11.162320] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:16:18.630 00:50:11 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:18.630 00:50:11 -- common/autotest_common.sh@638 -- # local es=0 00:16:18.630 00:50:11 -- common/autotest_common.sh@640 -- # valid_exec_arg /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:18.630 00:50:11 -- common/autotest_common.sh@626 -- # local arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:18.630 00:50:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:18.630 00:50:11 -- common/autotest_common.sh@630 -- # type -t /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:18.630 00:50:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:18.630 00:50:11 -- common/autotest_common.sh@632 -- # type -P /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:18.630 00:50:11 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:16:18.630 00:50:11 -- common/autotest_common.sh@632 -- # arg=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:18.630 00:50:11 -- common/autotest_common.sh@632 -- # [[ -x /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py ]] 00:16:18.630 00:50:11 -- common/autotest_common.sh@641 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:18.630 request: 00:16:18.630 { 00:16:18.630 "uuid": "89f957ec-971a-4c5f-a8f9-b62c5904801c", 00:16:18.630 "method": "bdev_lvol_get_lvstores", 00:16:18.630 "req_id": 1 00:16:18.630 } 00:16:18.630 Got JSON-RPC error response 00:16:18.630 response: 00:16:18.630 { 00:16:18.630 "code": -19, 00:16:18.630 "message": "No such device" 00:16:18.630 } 00:16:18.889 00:50:11 -- common/autotest_common.sh@641 -- # es=1 00:16:18.889 00:50:11 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:16:18.889 00:50:11 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:16:18.889 00:50:11 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:16:18.889 00:50:11 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:16:18.889 aio_bdev 00:16:18.889 00:50:11 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 506c7304-9c3a-42eb-ba6b-308bc83dcbd5 00:16:18.889 00:50:11 -- common/autotest_common.sh@885 -- # local 
bdev_name=506c7304-9c3a-42eb-ba6b-308bc83dcbd5 00:16:18.889 00:50:11 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:16:18.889 00:50:11 -- common/autotest_common.sh@887 -- # local i 00:16:18.889 00:50:11 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:16:18.889 00:50:11 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:16:18.889 00:50:11 -- common/autotest_common.sh@890 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:16:19.147 00:50:11 -- common/autotest_common.sh@892 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 506c7304-9c3a-42eb-ba6b-308bc83dcbd5 -t 2000 00:16:19.147 [ 00:16:19.147 { 00:16:19.147 "name": "506c7304-9c3a-42eb-ba6b-308bc83dcbd5", 00:16:19.147 "aliases": [ 00:16:19.147 "lvs/lvol" 00:16:19.147 ], 00:16:19.147 "product_name": "Logical Volume", 00:16:19.147 "block_size": 4096, 00:16:19.147 "num_blocks": 38912, 00:16:19.147 "uuid": "506c7304-9c3a-42eb-ba6b-308bc83dcbd5", 00:16:19.147 "assigned_rate_limits": { 00:16:19.147 "rw_ios_per_sec": 0, 00:16:19.147 "rw_mbytes_per_sec": 0, 00:16:19.147 "r_mbytes_per_sec": 0, 00:16:19.147 "w_mbytes_per_sec": 0 00:16:19.147 }, 00:16:19.147 "claimed": false, 00:16:19.147 "zoned": false, 00:16:19.147 "supported_io_types": { 00:16:19.147 "read": true, 00:16:19.147 "write": true, 00:16:19.147 "unmap": true, 00:16:19.147 "write_zeroes": true, 00:16:19.147 "flush": false, 00:16:19.147 "reset": true, 00:16:19.147 "compare": false, 00:16:19.147 "compare_and_write": false, 00:16:19.147 "abort": false, 00:16:19.147 "nvme_admin": false, 00:16:19.147 "nvme_io": false 00:16:19.147 }, 00:16:19.147 "driver_specific": { 00:16:19.147 "lvol": { 00:16:19.147 "lvol_store_uuid": "89f957ec-971a-4c5f-a8f9-b62c5904801c", 00:16:19.147 "base_bdev": "aio_bdev", 00:16:19.147 "thin_provision": false, 00:16:19.147 "snapshot": false, 00:16:19.147 "clone": false, 00:16:19.147 "esnap_clone": false 00:16:19.147 } 00:16:19.147 } 00:16:19.147 } 00:16:19.147 ] 00:16:19.147 00:50:11 -- common/autotest_common.sh@893 -- # return 0 00:16:19.148 00:50:11 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:19.148 00:50:11 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:16:19.407 00:50:11 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:16:19.407 00:50:11 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:19.407 00:50:11 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:16:19.407 00:50:12 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:16:19.407 00:50:12 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 506c7304-9c3a-42eb-ba6b-308bc83dcbd5 00:16:19.667 00:50:12 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 89f957ec-971a-4c5f-a8f9-b62c5904801c 00:16:19.667 00:50:12 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:16:19.927 00:50:12 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:16:19.927 00:16:19.927 real 0m16.545s 00:16:19.927 user 0m43.097s 00:16:19.927 sys 0m2.961s 00:16:19.927 00:50:12 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:16:19.927 00:50:12 -- common/autotest_common.sh@10 -- # set +x 00:16:19.927 ************************************ 00:16:19.927 END TEST lvs_grow_dirty 00:16:19.927 ************************************ 00:16:19.927 00:50:12 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:16:19.927 00:50:12 -- common/autotest_common.sh@794 -- # type=--id 00:16:19.927 00:50:12 -- common/autotest_common.sh@795 -- # id=0 00:16:19.927 00:50:12 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:16:19.927 00:50:12 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:16:19.927 00:50:12 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:16:19.927 00:50:12 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:16:19.927 00:50:12 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:16:19.927 00:50:12 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:16:19.927 nvmf_trace.0 00:16:19.927 00:50:12 -- common/autotest_common.sh@809 -- # return 0 00:16:19.927 00:50:12 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:16:19.927 00:50:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:19.927 00:50:12 -- nvmf/common.sh@117 -- # sync 00:16:19.927 00:50:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:19.927 00:50:12 -- nvmf/common.sh@120 -- # set +e 00:16:19.927 00:50:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:19.927 00:50:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:19.927 rmmod nvme_tcp 00:16:19.927 rmmod nvme_fabrics 00:16:19.927 rmmod nvme_keyring 00:16:19.927 00:50:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:19.927 00:50:12 -- nvmf/common.sh@124 -- # set -e 00:16:19.927 00:50:12 -- nvmf/common.sh@125 -- # return 0 00:16:19.927 00:50:12 -- nvmf/common.sh@478 -- # '[' -n 2730703 ']' 00:16:19.927 00:50:12 -- nvmf/common.sh@479 -- # killprocess 2730703 00:16:19.927 00:50:12 -- common/autotest_common.sh@936 -- # '[' -z 2730703 ']' 00:16:19.927 00:50:12 -- common/autotest_common.sh@940 -- # kill -0 2730703 00:16:19.927 00:50:12 -- common/autotest_common.sh@941 -- # uname 00:16:20.187 00:50:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:20.187 00:50:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2730703 00:16:20.188 00:50:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:20.188 00:50:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:20.188 00:50:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2730703' 00:16:20.188 killing process with pid 2730703 00:16:20.188 00:50:12 -- common/autotest_common.sh@955 -- # kill 2730703 00:16:20.188 00:50:12 -- common/autotest_common.sh@960 -- # wait 2730703 00:16:20.445 00:50:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:20.445 00:50:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:20.445 00:50:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:20.445 00:50:13 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:20.445 00:50:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:20.445 00:50:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.445 00:50:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:20.445 00:50:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.984 00:50:15 -- nvmf/common.sh@279 -- # ip 
-4 addr flush cvl_0_1 00:16:22.984 00:16:22.984 real 0m40.891s 00:16:22.984 user 1m2.890s 00:16:22.984 sys 0m8.801s 00:16:22.984 00:50:15 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:22.984 00:50:15 -- common/autotest_common.sh@10 -- # set +x 00:16:22.984 ************************************ 00:16:22.984 END TEST nvmf_lvs_grow 00:16:22.984 ************************************ 00:16:22.984 00:50:15 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:22.984 00:50:15 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:22.984 00:50:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:22.984 00:50:15 -- common/autotest_common.sh@10 -- # set +x 00:16:22.984 ************************************ 00:16:22.984 START TEST nvmf_bdev_io_wait 00:16:22.984 ************************************ 00:16:22.984 00:50:15 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:16:22.984 * Looking for test storage... 00:16:22.984 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:22.984 00:50:15 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:22.984 00:50:15 -- nvmf/common.sh@7 -- # uname -s 00:16:22.984 00:50:15 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:22.984 00:50:15 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:22.984 00:50:15 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:22.984 00:50:15 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:22.985 00:50:15 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:22.985 00:50:15 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:22.985 00:50:15 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:22.985 00:50:15 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:22.985 00:50:15 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:22.985 00:50:15 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:22.985 00:50:15 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:16:22.985 00:50:15 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:16:22.985 00:50:15 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:22.985 00:50:15 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:22.985 00:50:15 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:22.985 00:50:15 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:22.985 00:50:15 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:22.985 00:50:15 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:22.985 00:50:15 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:22.985 00:50:15 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:22.985 00:50:15 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.985 00:50:15 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.985 00:50:15 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.985 00:50:15 -- paths/export.sh@5 -- # export PATH 00:16:22.985 00:50:15 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:22.985 00:50:15 -- nvmf/common.sh@47 -- # : 0 00:16:22.985 00:50:15 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:22.985 00:50:15 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:22.985 00:50:15 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:22.985 00:50:15 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:22.985 00:50:15 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:22.985 00:50:15 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:22.985 00:50:15 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:22.985 00:50:15 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:22.985 00:50:15 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:22.985 00:50:15 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:22.985 00:50:15 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:16:22.985 00:50:15 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:22.985 00:50:15 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:22.985 00:50:15 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:22.985 00:50:15 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:22.985 00:50:15 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:22.985 00:50:15 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:22.985 00:50:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:22.985 00:50:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:22.985 00:50:15 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:16:22.985 00:50:15 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:22.985 00:50:15 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:22.985 00:50:15 -- common/autotest_common.sh@10 -- # set +x 00:16:28.260 00:50:20 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:28.260 00:50:20 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:28.261 00:50:20 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:28.261 00:50:20 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:28.261 00:50:20 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:28.261 00:50:20 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:28.261 00:50:20 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:28.261 00:50:20 -- nvmf/common.sh@295 -- # net_devs=() 00:16:28.261 00:50:20 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:28.261 00:50:20 -- nvmf/common.sh@296 -- # e810=() 00:16:28.261 00:50:20 -- nvmf/common.sh@296 -- # local -ga e810 00:16:28.261 00:50:20 -- nvmf/common.sh@297 -- # x722=() 00:16:28.261 00:50:20 -- nvmf/common.sh@297 -- # local -ga x722 00:16:28.261 00:50:20 -- nvmf/common.sh@298 -- # mlx=() 00:16:28.261 00:50:20 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:28.261 00:50:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:28.261 00:50:20 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:28.261 00:50:20 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:28.261 00:50:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:28.261 00:50:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:28.261 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:28.261 00:50:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 
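The device scan above works from a whitelist of PCI IDs (Intel E810 0x1592/0x159b, X722 0x37d2, and a set of Mellanox ConnectX IDs) and then resolves each matching PCI function to its kernel netdev through sysfs. A minimal sketch of that mapping, with array names borrowed from test/nvmf/common.sh:

    # Sketch of the PCI-to-netdev resolution traced above. Each PCI function
    # exposes its kernel interface under /sys/bus/pci/devices/<addr>/net/.
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../0000:27:00.0/net/cvl_0_0
        pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the path, keep the interface name
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done

This is what produces the "Found net devices under 0000:27:00.0: cvl_0_0" lines below.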
00:16:28.261 00:50:20 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:28.261 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:28.261 00:50:20 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:28.261 00:50:20 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.261 00:50:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.261 00:50:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:28.261 00:50:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.261 00:50:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:28.261 Found net devices under 0000:27:00.0: cvl_0_0 00:16:28.261 00:50:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.261 00:50:20 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:28.261 00:50:20 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:28.261 00:50:20 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:28.261 00:50:20 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:28.261 00:50:20 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:28.261 Found net devices under 0000:27:00.1: cvl_0_1 00:16:28.261 00:50:20 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:28.261 00:50:20 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:28.261 00:50:20 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:28.261 00:50:20 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:28.261 00:50:20 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:28.261 00:50:20 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:28.261 00:50:20 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:28.261 00:50:20 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:28.261 00:50:20 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:28.261 00:50:20 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:28.261 00:50:20 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:28.261 00:50:20 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:28.261 00:50:20 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:28.261 00:50:20 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:28.261 00:50:20 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:28.261 00:50:20 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:28.261 00:50:20 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:28.261 00:50:20 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:28.261 00:50:20 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:28.261 00:50:20 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:28.261 00:50:20 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:28.261 00:50:20 -- nvmf/common.sh@261 
-- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:28.261 00:50:20 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:28.261 00:50:20 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:28.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:28.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.514 ms 00:16:28.261 00:16:28.261 --- 10.0.0.2 ping statistics --- 00:16:28.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.261 rtt min/avg/max/mdev = 0.514/0.514/0.514/0.000 ms 00:16:28.261 00:50:20 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:28.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:28.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms 00:16:28.261 00:16:28.261 --- 10.0.0.1 ping statistics --- 00:16:28.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:28.261 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:16:28.261 00:50:20 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:28.261 00:50:20 -- nvmf/common.sh@411 -- # return 0 00:16:28.261 00:50:20 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:28.261 00:50:20 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:28.261 00:50:20 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:28.261 00:50:20 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:28.261 00:50:20 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:28.261 00:50:20 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:28.261 00:50:20 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:16:28.261 00:50:20 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:28.261 00:50:20 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:28.261 00:50:20 -- common/autotest_common.sh@10 -- # set +x 00:16:28.261 00:50:20 -- nvmf/common.sh@470 -- # nvmfpid=2735268 00:16:28.261 00:50:20 -- nvmf/common.sh@471 -- # waitforlisten 2735268 00:16:28.261 00:50:20 -- common/autotest_common.sh@817 -- # '[' -z 2735268 ']' 00:16:28.261 00:50:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:28.261 00:50:20 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:16:28.261 00:50:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:28.261 00:50:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:28.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:28.261 00:50:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:28.261 00:50:20 -- common/autotest_common.sh@10 -- # set +x 00:16:28.261 [2024-04-27 00:50:20.846383] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
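Everything nvmf_tcp_init does above amounts to a small two-port topology on one host: the first ice port (cvl_0_0) is pushed into a private network namespace and becomes the target side at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. Condensed from the trace, in order:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                         # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # namespace -> root ns

The two pings are the smoke test; only after both succeed does the script load nvme-tcp and start the target inside the namespace.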
00:16:28.261 [2024-04-27 00:50:20.846457] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:28.261 EAL: No free 2048 kB hugepages reported on node 1 00:16:28.519 [2024-04-27 00:50:20.964628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:28.519 [2024-04-27 00:50:21.062113] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:28.519 [2024-04-27 00:50:21.062150] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:28.519 [2024-04-27 00:50:21.062162] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:28.519 [2024-04-27 00:50:21.062171] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:28.519 [2024-04-27 00:50:21.062179] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:28.519 [2024-04-27 00:50:21.062303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:28.519 [2024-04-27 00:50:21.062365] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:28.519 [2024-04-27 00:50:21.062401] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:28.519 [2024-04-27 00:50:21.062414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:29.086 00:50:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:29.086 00:50:21 -- common/autotest_common.sh@850 -- # return 0 00:16:29.086 00:50:21 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:29.086 00:50:21 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:29.086 00:50:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 00:50:21 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:29.086 00:50:21 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:16:29.086 00:50:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.086 00:50:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 00:50:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.086 00:50:21 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:16:29.086 00:50:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.086 00:50:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 00:50:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.086 00:50:21 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:29.086 00:50:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.086 00:50:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.086 [2024-04-27 00:50:21.737517] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:29.086 00:50:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.086 00:50:21 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:29.086 00:50:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.086 00:50:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.344 Malloc0 00:16:29.344 00:50:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.344 00:50:21 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:29.344 00:50:21 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.344 00:50:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.344 00:50:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.344 00:50:21 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:29.344 00:50:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.344 00:50:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.344 00:50:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:29.345 00:50:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:29.345 00:50:21 -- common/autotest_common.sh@10 -- # set +x 00:16:29.345 [2024-04-27 00:50:21.820925] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:29.345 00:50:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2735583 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@30 -- # READ_PID=2735584 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2735586 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:16:29.345 00:50:21 -- nvmf/common.sh@521 -- # config=() 00:16:29.345 00:50:21 -- nvmf/common.sh@521 -- # local subsystem config 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2735588 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:16:29.345 00:50:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@35 -- # sync 00:16:29.345 00:50:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.345 { 00:16:29.345 "params": { 00:16:29.345 "name": "Nvme$subsystem", 00:16:29.345 "trtype": "$TEST_TRANSPORT", 00:16:29.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.345 "adrfam": "ipv4", 00:16:29.345 "trsvcid": "$NVMF_PORT", 00:16:29.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.345 "hdgst": ${hdgst:-false}, 00:16:29.345 "ddgst": ${ddgst:-false} 00:16:29.345 }, 00:16:29.345 "method": "bdev_nvme_attach_controller" 00:16:29.345 } 00:16:29.345 EOF 00:16:29.345 )") 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:16:29.345 00:50:21 -- nvmf/common.sh@521 -- # config=() 00:16:29.345 00:50:21 -- nvmf/common.sh@521 -- # local subsystem config 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:16:29.345 00:50:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:16:29.345 00:50:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.345 { 00:16:29.345 "params": { 00:16:29.345 "name": "Nvme$subsystem", 00:16:29.345 "trtype": "$TEST_TRANSPORT", 00:16:29.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.345 "adrfam": "ipv4", 00:16:29.345 "trsvcid": "$NVMF_PORT", 00:16:29.345 
"subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.345 "hdgst": ${hdgst:-false}, 00:16:29.345 "ddgst": ${ddgst:-false} 00:16:29.345 }, 00:16:29.345 "method": "bdev_nvme_attach_controller" 00:16:29.345 } 00:16:29.345 EOF 00:16:29.345 )") 00:16:29.345 00:50:21 -- nvmf/common.sh@521 -- # config=() 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:16:29.345 00:50:21 -- nvmf/common.sh@521 -- # local subsystem config 00:16:29.345 00:50:21 -- nvmf/common.sh@543 -- # cat 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:16:29.345 00:50:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.345 00:50:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.345 { 00:16:29.345 "params": { 00:16:29.345 "name": "Nvme$subsystem", 00:16:29.345 "trtype": "$TEST_TRANSPORT", 00:16:29.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.345 "adrfam": "ipv4", 00:16:29.345 "trsvcid": "$NVMF_PORT", 00:16:29.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.345 "hdgst": ${hdgst:-false}, 00:16:29.345 "ddgst": ${ddgst:-false} 00:16:29.345 }, 00:16:29.345 "method": "bdev_nvme_attach_controller" 00:16:29.345 } 00:16:29.345 EOF 00:16:29.345 )") 00:16:29.345 00:50:21 -- nvmf/common.sh@521 -- # config=() 00:16:29.345 00:50:21 -- nvmf/common.sh@521 -- # local subsystem config 00:16:29.345 00:50:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:16:29.345 00:50:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:16:29.345 { 00:16:29.345 "params": { 00:16:29.345 "name": "Nvme$subsystem", 00:16:29.345 "trtype": "$TEST_TRANSPORT", 00:16:29.345 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:29.345 "adrfam": "ipv4", 00:16:29.345 "trsvcid": "$NVMF_PORT", 00:16:29.345 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:29.345 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:29.345 "hdgst": ${hdgst:-false}, 00:16:29.345 "ddgst": ${ddgst:-false} 00:16:29.345 }, 00:16:29.345 "method": "bdev_nvme_attach_controller" 00:16:29.345 } 00:16:29.345 EOF 00:16:29.345 )") 00:16:29.345 00:50:21 -- target/bdev_io_wait.sh@37 -- # wait 2735583 00:16:29.345 00:50:21 -- nvmf/common.sh@543 -- # cat 00:16:29.345 00:50:21 -- nvmf/common.sh@543 -- # cat 00:16:29.345 00:50:21 -- nvmf/common.sh@543 -- # cat 00:16:29.345 00:50:21 -- nvmf/common.sh@545 -- # jq . 00:16:29.345 00:50:21 -- nvmf/common.sh@545 -- # jq . 00:16:29.345 00:50:21 -- nvmf/common.sh@545 -- # jq . 00:16:29.345 00:50:21 -- nvmf/common.sh@545 -- # jq . 
00:16:29.345 00:50:21 -- nvmf/common.sh@546 -- # IFS=, 00:16:29.345 00:50:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:29.345 "params": { 00:16:29.345 "name": "Nvme1", 00:16:29.345 "trtype": "tcp", 00:16:29.345 "traddr": "10.0.0.2", 00:16:29.345 "adrfam": "ipv4", 00:16:29.345 "trsvcid": "4420", 00:16:29.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.345 "hdgst": false, 00:16:29.345 "ddgst": false 00:16:29.345 }, 00:16:29.345 "method": "bdev_nvme_attach_controller" 00:16:29.345 }' 00:16:29.345 00:50:21 -- nvmf/common.sh@546 -- # IFS=, 00:16:29.345 00:50:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:29.345 "params": { 00:16:29.345 "name": "Nvme1", 00:16:29.345 "trtype": "tcp", 00:16:29.345 "traddr": "10.0.0.2", 00:16:29.345 "adrfam": "ipv4", 00:16:29.345 "trsvcid": "4420", 00:16:29.345 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.345 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.345 "hdgst": false, 00:16:29.345 "ddgst": false 00:16:29.345 }, 00:16:29.345 "method": "bdev_nvme_attach_controller" 00:16:29.345 }' 00:16:29.345 00:50:21 -- nvmf/common.sh@546 -- # IFS=, 00:16:29.345 00:50:21 -- nvmf/common.sh@546 -- # IFS=, 00:16:29.345 00:50:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:29.345 "params": { 00:16:29.345 "name": "Nvme1", 00:16:29.345 "trtype": "tcp", 00:16:29.345 "traddr": "10.0.0.2", 00:16:29.345 "adrfam": "ipv4", 00:16:29.346 "trsvcid": "4420", 00:16:29.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.346 "hdgst": false, 00:16:29.346 "ddgst": false 00:16:29.346 }, 00:16:29.346 "method": "bdev_nvme_attach_controller" 00:16:29.346 }' 00:16:29.346 00:50:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:16:29.346 "params": { 00:16:29.346 "name": "Nvme1", 00:16:29.346 "trtype": "tcp", 00:16:29.346 "traddr": "10.0.0.2", 00:16:29.346 "adrfam": "ipv4", 00:16:29.346 "trsvcid": "4420", 00:16:29.346 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:16:29.346 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:16:29.346 "hdgst": false, 00:16:29.346 "ddgst": false 00:16:29.346 }, 00:16:29.346 "method": "bdev_nvme_attach_controller" 00:16:29.346 }' 00:16:29.346 [2024-04-27 00:50:21.880884] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:29.346 [2024-04-27 00:50:21.880963] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:16:29.346 [2024-04-27 00:50:21.894154] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:29.346 [2024-04-27 00:50:21.894261] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:16:29.346 [2024-04-27 00:50:21.897499] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:29.346 [2024-04-27 00:50:21.897607] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:16:29.346 [2024-04-27 00:50:21.898426] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
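Four bdevperf processes (write, read, flush, unmap) now start side by side, and the EAL parameter lines show how they are kept apart: distinct core masks (0x10/0x20/0x40/0x80), distinct shared-memory IDs (-i 1..4, which is also where --file-prefix=spdk1..spdk4 comes from), and a 256 MB memory cap each (-s 256). An illustrative launch loop; the actual script starts them individually and records WRITE_PID, READ_PID, FLUSH_PID and UNMAP_PID:

    # Illustrative only; the flags match the four traced command lines.
    # Separate core masks, shm IDs and file prefixes let four DPDK
    # processes share the host without fighting over hugepage files.
    masks=(0x10 0x20 0x40 0x80)
    loads=(write read flush unmap)
    for i in 0 1 2 3; do
        ./build/examples/bdevperf -m "${masks[i]}" -i $((i + 1)) -s 256 \
            -q 128 -o 4096 -w "${loads[i]}" -t 1 \
            --json <(gen_nvmf_target_json) &
    done
    wait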
00:16:29.346 [2024-04-27 00:50:21.898530] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:16:29.346 EAL: No free 2048 kB hugepages reported on node 1
00:16:29.346 EAL: No free 2048 kB hugepages reported on node 1
00:16:29.604 [2024-04-27 00:50:22.053718] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.604 EAL: No free 2048 kB hugepages reported on node 1
00:16:29.604 EAL: No free 2048 kB hugepages reported on node 1
00:16:29.604 [2024-04-27 00:50:22.116518] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.604 [2024-04-27 00:50:22.162902] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.604 [2024-04-27 00:50:22.177718] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4
00:16:29.604 [2024-04-27 00:50:22.209325] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:29.604 [2024-04-27 00:50:22.242889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5
00:16:29.604 [2024-04-27 00:50:22.297340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7
00:16:29.863 [2024-04-27 00:50:22.334839] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6
00:16:30.124 Running I/O for 1 seconds...
00:16:30.124 Running I/O for 1 seconds...
00:16:30.124 Running I/O for 1 seconds...
00:16:30.124 Running I/O for 1 seconds...
00:16:31.086
00:16:31.086 Latency(us)
00:16:31.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:31.086 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:16:31.086 Nvme1n1 : 1.01 8125.62 31.74 0.00 0.00 15619.20 5622.30 24834.69
00:16:31.086 ===================================================================================================================
00:16:31.086 Total : 8125.62 31.74 0.00 0.00 15619.20 5622.30 24834.69
00:16:31.086
00:16:31.086 Latency(us)
00:16:31.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:31.086 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:16:31.086 Nvme1n1 : 1.01 12632.20 49.34 0.00 0.00 10091.31 6760.56 19453.84
00:16:31.086 ===================================================================================================================
00:16:31.086 Total : 12632.20 49.34 0.00 0.00 10091.31 6760.56 19453.84
00:16:31.086
00:16:31.086 Latency(us)
00:16:31.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:31.086 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:16:31.086 Nvme1n1 : 1.00 8630.90 33.71 0.00 0.00 14797.75 3138.83 35872.34
00:16:31.086 ===================================================================================================================
00:16:31.086 Total : 8630.90 33.71 0.00 0.00 14797.75 3138.83 35872.34
00:16:31.086
00:16:31.086 Latency(us)
00:16:31.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:31.086 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:16:31.086 Nvme1n1 : 1.00 227219.56 887.58 0.00 0.00 561.15 209.11 1095.14
00:16:31.086 ===================================================================================================================
00:16:31.086 Total : 227219.56 887.58 0.00 0.00 561.15 209.11 1095.14
00:16:31.653 00:50:24 -- target/bdev_io_wait.sh@38 -- # wait 2735584
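The MiB/s column in these tables is just IOPS times the 4096-byte IO size, which makes the results easy to sanity-check:

    # MiB/s = IOPS * io_size / 2^20; e.g. the read job:
    awk 'BEGIN { printf "%.2f\n", 12632.20 * 4096 / 1048576 }'   # -> 49.34
    # The flush job's 887.58 MiB/s is the same nominal arithmetic
    # (227219.56 * 4096 / 2^20) even though flushes transfer no data.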
00:50:24 -- target/bdev_io_wait.sh@39 -- # wait 2735586 00:16:31.653 00:50:24 -- target/bdev_io_wait.sh@40 -- # wait 2735588 00:16:31.653 00:50:24 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:31.653 00:50:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:31.653 00:50:24 -- common/autotest_common.sh@10 -- # set +x 00:16:31.653 00:50:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:31.653 00:50:24 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:16:31.653 00:50:24 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:16:31.653 00:50:24 -- nvmf/common.sh@477 -- # nvmfcleanup 00:16:31.653 00:50:24 -- nvmf/common.sh@117 -- # sync 00:16:31.653 00:50:24 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.653 00:50:24 -- nvmf/common.sh@120 -- # set +e 00:16:31.653 00:50:24 -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.653 00:50:24 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.653 rmmod nvme_tcp 00:16:31.653 rmmod nvme_fabrics 00:16:31.653 rmmod nvme_keyring 00:16:31.653 00:50:24 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.653 00:50:24 -- nvmf/common.sh@124 -- # set -e 00:16:31.653 00:50:24 -- nvmf/common.sh@125 -- # return 0 00:16:31.653 00:50:24 -- nvmf/common.sh@478 -- # '[' -n 2735268 ']' 00:16:31.653 00:50:24 -- nvmf/common.sh@479 -- # killprocess 2735268 00:16:31.653 00:50:24 -- common/autotest_common.sh@936 -- # '[' -z 2735268 ']' 00:16:31.653 00:50:24 -- common/autotest_common.sh@940 -- # kill -0 2735268 00:16:31.653 00:50:24 -- common/autotest_common.sh@941 -- # uname 00:16:31.653 00:50:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:31.653 00:50:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2735268 00:16:31.653 00:50:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:16:31.653 00:50:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:16:31.653 00:50:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2735268' 00:16:31.653 killing process with pid 2735268 00:16:31.653 00:50:24 -- common/autotest_common.sh@955 -- # kill 2735268 00:16:31.653 00:50:24 -- common/autotest_common.sh@960 -- # wait 2735268 00:16:32.229 00:50:24 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:32.229 00:50:24 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:32.229 00:50:24 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:32.229 00:50:24 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:32.229 00:50:24 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:32.229 00:50:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:32.229 00:50:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:32.229 00:50:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.763 00:50:26 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:34.763 00:16:34.763 real 0m11.553s 00:16:34.763 user 0m23.502s 00:16:34.763 sys 0m5.679s 00:16:34.763 00:50:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:34.763 00:50:26 -- common/autotest_common.sh@10 -- # set +x 00:16:34.763 ************************************ 00:16:34.763 END TEST nvmf_bdev_io_wait 00:16:34.763 ************************************ 00:16:34.763 00:50:26 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:34.763 00:50:26 -- common/autotest_common.sh@1087 -- # 
'[' 3 -le 1 ']'
00:16:34.763 00:50:26 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:16:34.763 00:50:26 -- common/autotest_common.sh@10 -- # set +x
00:16:34.763 ************************************
00:16:34.763 START TEST nvmf_queue_depth
00:16:34.763 ************************************
00:16:34.763 00:50:26 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp
00:16:34.763 * Looking for test storage...
00:16:34.763 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:16:34.763 00:50:27 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:16:34.763 00:50:27 -- nvmf/common.sh@7 -- # uname -s
00:16:34.763 00:50:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:16:34.763 00:50:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:16:34.763 00:50:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:16:34.763 00:50:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:16:34.763 00:50:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:16:34.763 00:50:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:16:34.763 00:50:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:16:34.763 00:50:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:16:34.763 00:50:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:16:34.763 00:50:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:16:34.763 00:50:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea
00:16:34.763 00:50:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea
00:16:34.763 00:50:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:16:34.763 00:50:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:16:34.763 00:50:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:16:34.763 00:50:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:16:34.763 00:50:27 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:16:34.763 00:50:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:16:34.763 00:50:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:16:34.763 00:50:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:16:34.763 00:50:27 -- nvmf/common.sh@47 -- # : 0
00:16:34.763 00:50:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:16:34.763 00:50:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:16:34.763 00:50:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:16:34.763 00:50:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:16:34.763 00:50:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:16:34.763 00:50:27 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:16:34.763 00:50:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:16:34.763 00:50:27 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:16:34.763 00:50:27 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64
00:16:34.763 00:50:27 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512
00:16:34.763 00:50:27 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:16:34.763 00:50:27 -- target/queue_depth.sh@19 -- # nvmftestinit
00:16:34.763 00:50:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:16:34.763 00:50:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:16:34.763 00:50:27 -- nvmf/common.sh@437 -- # prepare_net_devs
00:16:34.763 00:50:27 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:16:34.763 00:50:27 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:16:34.763 00:50:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:16:34.763 00:50:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:16:34.763 00:50:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:16:34.763 00:50:27 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]]
00:16:34.763 00:50:27 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:16:34.763 00:50:27 -- nvmf/common.sh@285 -- # xtrace_disable
00:16:34.763 00:50:27 --
common/autotest_common.sh@10 -- # set +x 00:16:40.041 00:50:31 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:40.041 00:50:31 -- nvmf/common.sh@291 -- # pci_devs=() 00:16:40.041 00:50:31 -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:40.041 00:50:31 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:40.041 00:50:31 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:40.041 00:50:31 -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:40.041 00:50:31 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:40.041 00:50:31 -- nvmf/common.sh@295 -- # net_devs=() 00:16:40.041 00:50:31 -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:40.041 00:50:31 -- nvmf/common.sh@296 -- # e810=() 00:16:40.041 00:50:31 -- nvmf/common.sh@296 -- # local -ga e810 00:16:40.041 00:50:31 -- nvmf/common.sh@297 -- # x722=() 00:16:40.041 00:50:31 -- nvmf/common.sh@297 -- # local -ga x722 00:16:40.041 00:50:31 -- nvmf/common.sh@298 -- # mlx=() 00:16:40.041 00:50:31 -- nvmf/common.sh@298 -- # local -ga mlx 00:16:40.041 00:50:31 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:40.041 00:50:31 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:40.041 00:50:31 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:40.041 00:50:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.041 00:50:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:16:40.041 Found 0000:27:00.0 (0x8086 - 0x159b) 00:16:40.041 00:50:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:40.041 00:50:31 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:16:40.041 Found 0000:27:00.1 (0x8086 - 0x159b) 00:16:40.041 00:50:31 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:40.041 
00:50:31 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:40.041 00:50:31 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:16:40.041 00:50:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.041 00:50:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.041 00:50:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:40.041 00:50:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.041 00:50:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:16:40.041 Found net devices under 0000:27:00.0: cvl_0_0 00:16:40.042 00:50:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.042 00:50:31 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:40.042 00:50:31 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:40.042 00:50:31 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:16:40.042 00:50:31 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:40.042 00:50:31 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:16:40.042 Found net devices under 0000:27:00.1: cvl_0_1 00:16:40.042 00:50:31 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:16:40.042 00:50:31 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:16:40.042 00:50:31 -- nvmf/common.sh@403 -- # is_hw=yes 00:16:40.042 00:50:31 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:16:40.042 00:50:31 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:16:40.042 00:50:31 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:16:40.042 00:50:31 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:40.042 00:50:31 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:40.042 00:50:31 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:40.042 00:50:31 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:40.042 00:50:31 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:40.042 00:50:31 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:40.042 00:50:31 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:40.042 00:50:31 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:40.042 00:50:31 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:40.042 00:50:31 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:40.042 00:50:31 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:40.042 00:50:31 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:40.042 00:50:31 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:40.042 00:50:31 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:40.042 00:50:31 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:40.042 00:50:31 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:40.042 00:50:31 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:40.042 00:50:32 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:40.042 00:50:32 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:40.042 00:50:32 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:40.042 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:40.042 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:16:40.042 00:16:40.042 --- 10.0.0.2 ping statistics --- 00:16:40.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.042 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:16:40.042 00:50:32 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:40.042 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:40.042 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.058 ms 00:16:40.042 00:16:40.042 --- 10.0.0.1 ping statistics --- 00:16:40.042 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:40.042 rtt min/avg/max/mdev = 0.058/0.058/0.058/0.000 ms 00:16:40.042 00:50:32 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:40.042 00:50:32 -- nvmf/common.sh@411 -- # return 0 00:16:40.042 00:50:32 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:16:40.042 00:50:32 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:40.042 00:50:32 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:16:40.042 00:50:32 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:16:40.042 00:50:32 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:40.042 00:50:32 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:16:40.042 00:50:32 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:16:40.042 00:50:32 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:40.042 00:50:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:16:40.042 00:50:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:40.042 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:40.042 00:50:32 -- nvmf/common.sh@470 -- # nvmfpid=2739906 00:16:40.042 00:50:32 -- nvmf/common.sh@471 -- # waitforlisten 2739906 00:16:40.042 00:50:32 -- common/autotest_common.sh@817 -- # '[' -z 2739906 ']' 00:16:40.042 00:50:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:40.042 00:50:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:40.042 00:50:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:40.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:40.042 00:50:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:40.042 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:40.042 00:50:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:40.042 [2024-04-27 00:50:32.168725] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:40.042 [2024-04-27 00:50:32.168825] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:40.042 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.042 [2024-04-27 00:50:32.311003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.042 [2024-04-27 00:50:32.462570] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:40.042 [2024-04-27 00:50:32.462621] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:40.042 [2024-04-27 00:50:32.462636] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:40.042 [2024-04-27 00:50:32.462650] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:40.042 [2024-04-27 00:50:32.462663] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:40.042 [2024-04-27 00:50:32.462700] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:40.302 00:50:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:40.302 00:50:32 -- common/autotest_common.sh@850 -- # return 0 00:16:40.302 00:50:32 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:16:40.302 00:50:32 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:40.302 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:40.302 00:50:32 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:40.302 00:50:32 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:40.302 00:50:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.302 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:40.302 [2024-04-27 00:50:32.918438] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:40.302 00:50:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.302 00:50:32 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:40.302 00:50:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.302 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:40.302 Malloc0 00:16:40.302 00:50:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.302 00:50:32 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:40.302 00:50:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.302 00:50:32 -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 00:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.563 00:50:33 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:40.563 00:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.563 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 00:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.563 00:50:33 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:40.563 00:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:40.563 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 [2024-04-27 00:50:33.017815] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:40.563 00:50:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:40.563 00:50:33 -- target/queue_depth.sh@30 -- # bdevperf_pid=2740133 00:16:40.563 00:50:33 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:40.563 00:50:33 -- target/queue_depth.sh@33 -- # waitforlisten 2740133 /var/tmp/bdevperf.sock 00:16:40.563 00:50:33 -- common/autotest_common.sh@817 -- # '[' -z 2740133 ']' 00:16:40.563 00:50:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:40.563 00:50:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:40.563 
00:50:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:40.563 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:40.563 00:50:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:40.563 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:40.563 00:50:33 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:40.563 [2024-04-27 00:50:33.102728] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:16:40.563 [2024-04-27 00:50:33.102852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2740133 ] 00:16:40.563 EAL: No free 2048 kB hugepages reported on node 1 00:16:40.563 [2024-04-27 00:50:33.226861] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.822 [2024-04-27 00:50:33.323185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:41.388 00:50:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:41.388 00:50:33 -- common/autotest_common.sh@850 -- # return 0 00:16:41.388 00:50:33 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:41.388 00:50:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:41.388 00:50:33 -- common/autotest_common.sh@10 -- # set +x 00:16:41.388 NVMe0n1 00:16:41.388 00:50:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:41.388 00:50:34 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:41.388 Running I/O for 10 seconds... 
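This test drives one deep verify workload instead of four shallow ones: bdevperf starts idle (-z) listening on its own RPC socket, the NVMe-oF controller is attached over that socket, and bdevperf.py then kicks off the timed run at queue depth 1024. Condensed from the trace above, with the absolute workspace paths shortened:

    ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 1024 -o 4096 -w verify -t 10 &
    ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests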
00:16:53.593
00:16:53.593 Latency(us)
00:16:53.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:53.593 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:16:53.593 Verification LBA range: start 0x0 length 0x4000
00:16:53.593 NVMe0n1 : 10.06 11851.25 46.29 0.00 0.00 86098.43 18074.14 80574.79
00:16:53.593 ===================================================================================================================
00:16:53.593 Total : 11851.25 46.29 0.00 0.00 86098.43 18074.14 80574.79
00:16:53.593 0
00:16:53.593 00:50:44 -- target/queue_depth.sh@39 -- # killprocess 2740133
00:16:53.593 00:50:44 -- common/autotest_common.sh@936 -- # '[' -z 2740133 ']'
00:16:53.593 00:50:44 -- common/autotest_common.sh@940 -- # kill -0 2740133
00:16:53.593 00:50:44 -- common/autotest_common.sh@941 -- # uname
00:16:53.593 00:50:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:53.593 00:50:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2740133
00:16:53.593 00:50:44 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:16:53.593 00:50:44 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:16:53.593 00:50:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2740133'
00:16:53.593 killing process with pid 2740133
00:16:53.593 00:50:44 -- common/autotest_common.sh@955 -- # kill 2740133
00:16:53.593 Received shutdown signal, test time was about 10.000000 seconds
00:16:53.593
00:16:53.593 Latency(us)
00:16:53.593 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:53.593 ===================================================================================================================
00:16:53.593 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:16:53.593 00:50:44 -- common/autotest_common.sh@960 -- # wait 2740133
00:16:53.593 00:50:44 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:16:53.593 00:50:44 -- target/queue_depth.sh@43 -- # nvmftestfini
00:16:53.593 00:50:44 -- nvmf/common.sh@477 -- # nvmfcleanup
00:16:53.593 00:50:44 -- nvmf/common.sh@117 -- # sync
00:16:53.593 00:50:44 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:16:53.593 00:50:44 -- nvmf/common.sh@120 -- # set +e
00:16:53.593 00:50:44 -- nvmf/common.sh@121 -- # for i in {1..20}
00:16:53.593 00:50:44 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:16:53.593 rmmod nvme_tcp
00:16:53.593 rmmod nvme_fabrics
00:16:53.593 rmmod nvme_keyring
00:16:53.594 00:50:44 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:16:53.594 00:50:44 -- nvmf/common.sh@124 -- # set -e
00:16:53.594 00:50:44 -- nvmf/common.sh@125 -- # return 0
00:16:53.594 00:50:44 -- nvmf/common.sh@478 -- # '[' -n 2739906 ']'
00:16:53.594 00:50:44 -- nvmf/common.sh@479 -- # killprocess 2739906
00:16:53.594 00:50:44 -- common/autotest_common.sh@936 -- # '[' -z 2739906 ']'
00:16:53.594 00:50:44 -- common/autotest_common.sh@940 -- # kill -0 2739906
00:16:53.594 00:50:44 -- common/autotest_common.sh@941 -- # uname
00:16:53.594 00:50:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:16:53.594 00:50:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2739906
00:16:53.594 00:50:44 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:16:53.594 00:50:44 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:16:53.594 00:50:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2739906'
00:16:53.594 killing process with pid 2739906
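With 1024 requests kept in flight, Little's law ties the headline numbers in the verify table together: average latency should sit close to queue_depth / IOPS, and it does:

    # 1024 in-flight requests / 11851.25 IOPS ~= 86.4 ms per request,
    # in line with the reported 86098.43 us average.
    awk 'BEGIN { printf "%.0f us\n", 1024 / 11851.25 * 1e6 }'   # -> 86404 us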
00:50:44 -- common/autotest_common.sh@955 -- # kill 2739906 00:16:53.594 00:50:44 -- common/autotest_common.sh@960 -- # wait 2739906 00:16:53.594 00:50:45 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:16:53.594 00:50:45 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:16:53.594 00:50:45 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:16:53.594 00:50:45 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:53.594 00:50:45 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:53.594 00:50:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:53.594 00:50:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:53.594 00:50:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.975 00:50:47 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:54.975 00:16:54.975 real 0m20.280s 00:16:54.975 user 0m25.303s 00:16:54.975 sys 0m5.154s 00:16:54.975 00:50:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:54.975 00:50:47 -- common/autotest_common.sh@10 -- # set +x 00:16:54.975 ************************************ 00:16:54.975 END TEST nvmf_queue_depth 00:16:54.975 ************************************ 00:16:54.976 00:50:47 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:54.976 00:50:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:54.976 00:50:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:54.976 00:50:47 -- common/autotest_common.sh@10 -- # set +x 00:16:54.976 ************************************ 00:16:54.976 START TEST nvmf_multipath 00:16:54.976 ************************************ 00:16:54.976 00:50:47 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:54.976 * Looking for test storage... 
00:16:54.976 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:16:54.976 00:50:47 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:16:54.976 00:50:47 -- nvmf/common.sh@7 -- # uname -s 00:16:54.976 00:50:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:54.976 00:50:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:54.976 00:50:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:54.976 00:50:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:54.976 00:50:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:54.976 00:50:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:54.976 00:50:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:54.976 00:50:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:54.976 00:50:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:54.976 00:50:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:54.976 00:50:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:16:54.976 00:50:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:16:54.976 00:50:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:54.976 00:50:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:54.976 00:50:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:16:54.976 00:50:47 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:54.976 00:50:47 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:16:54.976 00:50:47 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:54.976 00:50:47 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:54.976 00:50:47 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:54.976 00:50:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.976 00:50:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.976 00:50:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.976 00:50:47 -- paths/export.sh@5 -- # export PATH 00:16:54.976 00:50:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:54.976 00:50:47 -- nvmf/common.sh@47 -- # : 0 00:16:54.976 00:50:47 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:54.976 00:50:47 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:54.976 00:50:47 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:54.976 00:50:47 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:54.976 00:50:47 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:54.976 00:50:47 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:54.976 00:50:47 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:54.976 00:50:47 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:54.976 00:50:47 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:54.976 00:50:47 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:54.976 00:50:47 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:54.976 00:50:47 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:16:54.976 00:50:47 -- target/multipath.sh@43 -- # nvmftestinit 00:16:54.976 00:50:47 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:16:54.976 00:50:47 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:54.976 00:50:47 -- nvmf/common.sh@437 -- # prepare_net_devs 00:16:54.976 00:50:47 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:16:54.976 00:50:47 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:16:54.976 00:50:47 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:54.976 00:50:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:54.976 00:50:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:54.976 00:50:47 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:16:54.976 00:50:47 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:16:54.976 00:50:47 -- nvmf/common.sh@285 -- # xtrace_disable 00:16:54.976 00:50:47 -- common/autotest_common.sh@10 -- # set +x 00:17:01.645 00:50:53 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:01.645 00:50:53 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:01.645 00:50:53 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:01.645 00:50:53 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:01.645 00:50:53 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:01.645 00:50:53 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:01.645 00:50:53 
-- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:01.645 00:50:53 -- nvmf/common.sh@295 -- # net_devs=() 00:17:01.645 00:50:53 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:01.645 00:50:53 -- nvmf/common.sh@296 -- # e810=() 00:17:01.645 00:50:53 -- nvmf/common.sh@296 -- # local -ga e810 00:17:01.645 00:50:53 -- nvmf/common.sh@297 -- # x722=() 00:17:01.645 00:50:53 -- nvmf/common.sh@297 -- # local -ga x722 00:17:01.645 00:50:53 -- nvmf/common.sh@298 -- # mlx=() 00:17:01.645 00:50:53 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:01.645 00:50:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:01.645 00:50:53 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:01.645 00:50:53 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:01.645 00:50:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.645 00:50:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:01.645 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:01.645 00:50:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:01.645 00:50:53 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:01.645 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:01.645 00:50:53 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:01.645 00:50:53 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.645 00:50:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.645 00:50:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:01.645 00:50:53 -- nvmf/common.sh@388 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.645 00:50:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:01.645 Found net devices under 0000:27:00.0: cvl_0_0 00:17:01.645 00:50:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.645 00:50:53 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:01.645 00:50:53 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:01.645 00:50:53 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:01.645 00:50:53 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:01.645 00:50:53 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:01.645 Found net devices under 0000:27:00.1: cvl_0_1 00:17:01.645 00:50:53 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:01.645 00:50:53 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:01.645 00:50:53 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:01.645 00:50:53 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:01.645 00:50:53 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:01.645 00:50:53 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:01.645 00:50:53 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:01.645 00:50:53 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:01.645 00:50:53 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:01.645 00:50:53 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:01.645 00:50:53 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:01.645 00:50:53 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:01.645 00:50:53 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:01.645 00:50:53 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:01.645 00:50:53 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:01.645 00:50:53 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:01.645 00:50:53 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:01.645 00:50:53 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:01.645 00:50:53 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:01.645 00:50:53 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:01.645 00:50:53 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:01.645 00:50:53 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:01.645 00:50:53 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:01.645 00:50:53 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:01.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:01.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:17:01.645 00:17:01.645 --- 10.0.0.2 ping statistics --- 00:17:01.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.645 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:17:01.645 00:50:53 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:01.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:01.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:17:01.645 00:17:01.645 --- 10.0.0.1 ping statistics --- 00:17:01.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:01.645 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:17:01.645 00:50:53 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:01.645 00:50:53 -- nvmf/common.sh@411 -- # return 0 00:17:01.645 00:50:53 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:01.645 00:50:53 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:01.645 00:50:53 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:01.645 00:50:53 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:01.645 00:50:53 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:01.645 00:50:53 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:01.645 00:50:53 -- target/multipath.sh@45 -- # '[' -z ']' 00:17:01.645 00:50:53 -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:17:01.645 only one NIC for nvmf test 00:17:01.645 00:50:53 -- target/multipath.sh@47 -- # nvmftestfini 00:17:01.645 00:50:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:01.645 00:50:53 -- nvmf/common.sh@117 -- # sync 00:17:01.645 00:50:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.645 00:50:53 -- nvmf/common.sh@120 -- # set +e 00:17:01.645 00:50:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.645 00:50:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.645 rmmod nvme_tcp 00:17:01.645 rmmod nvme_fabrics 00:17:01.645 rmmod nvme_keyring 00:17:01.645 00:50:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.645 00:50:54 -- nvmf/common.sh@124 -- # set -e 00:17:01.645 00:50:54 -- nvmf/common.sh@125 -- # return 0 00:17:01.645 00:50:54 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:01.645 00:50:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:01.645 00:50:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:01.645 00:50:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:01.645 00:50:54 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.645 00:50:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.645 00:50:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.645 00:50:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:01.645 00:50:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.552 00:50:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.552 00:50:56 -- target/multipath.sh@48 -- # exit 0 00:17:03.552 00:50:56 -- target/multipath.sh@1 -- # nvmftestfini 00:17:03.552 00:50:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:03.552 00:50:56 -- nvmf/common.sh@117 -- # sync 00:17:03.552 00:50:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:03.552 00:50:56 -- nvmf/common.sh@120 -- # set +e 00:17:03.552 00:50:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:03.552 00:50:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:03.552 00:50:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:03.552 00:50:56 -- nvmf/common.sh@124 -- # set -e 00:17:03.552 00:50:56 -- nvmf/common.sh@125 -- # return 0 00:17:03.552 00:50:56 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:17:03.552 00:50:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:03.552 00:50:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:03.552 00:50:56 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:17:03.552 00:50:56 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:03.552 00:50:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:03.552 00:50:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.552 00:50:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.552 00:50:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.552 00:50:56 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.552 00:17:03.552 real 0m8.719s 00:17:03.552 user 0m1.808s 00:17:03.552 sys 0m4.855s 00:17:03.552 00:50:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:03.552 00:50:56 -- common/autotest_common.sh@10 -- # set +x 00:17:03.552 ************************************ 00:17:03.552 END TEST nvmf_multipath 00:17:03.552 ************************************ 00:17:03.552 00:50:56 -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:03.552 00:50:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:03.552 00:50:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:03.552 00:50:56 -- common/autotest_common.sh@10 -- # set +x 00:17:03.813 ************************************ 00:17:03.813 START TEST nvmf_zcopy 00:17:03.813 ************************************ 00:17:03.813 00:50:56 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:17:03.813 * Looking for test storage... 00:17:03.813 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:03.813 00:50:56 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:03.813 00:50:56 -- nvmf/common.sh@7 -- # uname -s 00:17:03.813 00:50:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:03.813 00:50:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:03.813 00:50:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:03.813 00:50:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:03.813 00:50:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:03.813 00:50:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:03.813 00:50:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:03.813 00:50:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:03.813 00:50:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:03.813 00:50:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:03.813 00:50:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:17:03.813 00:50:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:17:03.813 00:50:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:03.813 00:50:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:03.813 00:50:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:03.813 00:50:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:03.813 00:50:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:03.813 00:50:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:03.813 00:50:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:03.813 00:50:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:03.813 00:50:56 -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.813 00:50:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.813 00:50:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.813 00:50:56 -- paths/export.sh@5 -- # export PATH 00:17:03.813 00:50:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:03.813 00:50:56 -- nvmf/common.sh@47 -- # : 0 00:17:03.813 00:50:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:03.813 00:50:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:03.813 00:50:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:03.813 00:50:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:03.813 00:50:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:03.813 00:50:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:03.813 00:50:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:03.813 00:50:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:03.813 00:50:56 -- target/zcopy.sh@12 -- # nvmftestinit 00:17:03.813 00:50:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:03.813 00:50:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:03.813 00:50:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:03.813 00:50:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:03.813 00:50:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:03.813 00:50:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:03.813 00:50:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:03.813 
00:50:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.813 00:50:56 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:17:03.813 00:50:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:03.813 00:50:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:03.813 00:50:56 -- common/autotest_common.sh@10 -- # set +x 00:17:10.390 00:51:02 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:10.390 00:51:02 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:10.390 00:51:02 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:10.390 00:51:02 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:10.390 00:51:02 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:10.390 00:51:02 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:10.390 00:51:02 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:10.390 00:51:02 -- nvmf/common.sh@295 -- # net_devs=() 00:17:10.390 00:51:02 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:10.390 00:51:02 -- nvmf/common.sh@296 -- # e810=() 00:17:10.390 00:51:02 -- nvmf/common.sh@296 -- # local -ga e810 00:17:10.390 00:51:02 -- nvmf/common.sh@297 -- # x722=() 00:17:10.390 00:51:02 -- nvmf/common.sh@297 -- # local -ga x722 00:17:10.390 00:51:02 -- nvmf/common.sh@298 -- # mlx=() 00:17:10.390 00:51:02 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:10.390 00:51:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:10.390 00:51:02 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:10.390 00:51:02 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:10.390 00:51:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.390 00:51:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:10.390 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:10.390 00:51:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.390 00:51:02 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:10.390 00:51:02 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:10.390 Found 0000:27:00.1 (0x8086 - 0x159b) 
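The PCI scan that follows again resolves the two E810 functions to cvl_0_0 and cvl_0_1, and nvmf_tcp_init then wires them into a point-to-point fabric: the target port moves into its own network namespace and the pair (presumably cabled back-to-back) carries 10.0.0.0/24 traffic. The commands below are copied from the trace; only the grouping into a sketch is editorial.

```bash
# nvmf_tcp_init, as traced below: isolate the target port in a netns so the
# same host can act as both NVMe/TCP target and initiator over real NICs.
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"     # target side leaves the host ns
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1        # initiator stays in the host ns
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0

ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# open the NVMe/TCP port on the initiator side, then prove both directions work
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"
```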
00:17:10.391 00:51:02 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:10.391 00:51:02 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.391 00:51:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.391 00:51:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:10.391 00:51:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.391 00:51:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:17:10.391 Found net devices under 0000:27:00.0: cvl_0_0 00:17:10.391 00:51:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.391 00:51:02 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:10.391 00:51:02 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:10.391 00:51:02 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:10.391 00:51:02 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:10.391 00:51:02 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:10.391 Found net devices under 0000:27:00.1: cvl_0_1 00:17:10.391 00:51:02 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:10.391 00:51:02 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:10.391 00:51:02 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:10.391 00:51:02 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:10.391 00:51:02 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:10.391 00:51:02 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:10.391 00:51:02 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:10.391 00:51:02 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:10.391 00:51:02 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:10.391 00:51:02 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:10.391 00:51:02 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:10.391 00:51:02 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:10.391 00:51:02 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:10.391 00:51:02 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:10.391 00:51:02 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:10.391 00:51:02 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:10.391 00:51:02 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:10.391 00:51:02 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:10.391 00:51:02 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:10.391 00:51:02 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:10.391 00:51:02 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:10.391 00:51:02 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:10.391 00:51:02 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp 
--dport 4420 -j ACCEPT 00:17:10.391 00:51:02 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:10.391 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:10.391 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.732 ms 00:17:10.391 00:17:10.391 --- 10.0.0.2 ping statistics --- 00:17:10.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.391 rtt min/avg/max/mdev = 0.732/0.732/0.732/0.000 ms 00:17:10.391 00:51:02 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:10.391 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:10.391 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:17:10.391 00:17:10.391 --- 10.0.0.1 ping statistics --- 00:17:10.391 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:10.391 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:17:10.391 00:51:02 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:10.391 00:51:02 -- nvmf/common.sh@411 -- # return 0 00:17:10.391 00:51:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:10.391 00:51:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:10.391 00:51:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:10.391 00:51:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:10.391 00:51:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:10.391 00:51:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:10.391 00:51:02 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:17:10.391 00:51:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:10.391 00:51:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:10.391 00:51:02 -- common/autotest_common.sh@10 -- # set +x 00:17:10.391 00:51:02 -- nvmf/common.sh@470 -- # nvmfpid=2750719 00:17:10.391 00:51:02 -- nvmf/common.sh@471 -- # waitforlisten 2750719 00:17:10.391 00:51:02 -- common/autotest_common.sh@817 -- # '[' -z 2750719 ']' 00:17:10.391 00:51:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:10.391 00:51:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:10.391 00:51:02 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:10.391 00:51:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:10.391 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:10.391 00:51:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:10.391 00:51:02 -- common/autotest_common.sh@10 -- # set +x 00:17:10.391 [2024-04-27 00:51:02.531530] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:17:10.391 [2024-04-27 00:51:02.531642] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:10.391 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.391 [2024-04-27 00:51:02.682684] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:10.391 [2024-04-27 00:51:02.846063] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:10.391 [2024-04-27 00:51:02.846127] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:10.391 [2024-04-27 00:51:02.846145] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:10.391 [2024-04-27 00:51:02.846162] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:10.392 [2024-04-27 00:51:02.846174] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:10.392 [2024-04-27 00:51:02.846232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:10.652 00:51:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:10.652 00:51:03 -- common/autotest_common.sh@850 -- # return 0 00:17:10.652 00:51:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:10.652 00:51:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:10.652 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.652 00:51:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:10.652 00:51:03 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:17:10.652 00:51:03 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:17:10.652 00:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.652 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.652 [2024-04-27 00:51:03.294825] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:10.652 00:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.652 00:51:03 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:10.652 00:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.652 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.652 00:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.652 00:51:03 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:10.652 00:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.652 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.652 [2024-04-27 00:51:03.311088] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:10.652 00:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.652 00:51:03 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:10.652 00:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.652 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.652 00:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.652 00:51:03 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:17:10.652 00:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.652 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.911 malloc0 00:17:10.911 00:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.911 00:51:03 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:10.911 00:51:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:10.911 00:51:03 -- common/autotest_common.sh@10 -- # set +x 00:17:10.911 00:51:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:10.911 00:51:03 -- target/zcopy.sh@33 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:17:10.911 00:51:03 -- target/zcopy.sh@33 -- # 
gen_nvmf_target_json 00:17:10.911 00:51:03 -- nvmf/common.sh@521 -- # config=() 00:17:10.911 00:51:03 -- nvmf/common.sh@521 -- # local subsystem config 00:17:10.911 00:51:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:10.911 00:51:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:10.911 { 00:17:10.911 "params": { 00:17:10.911 "name": "Nvme$subsystem", 00:17:10.911 "trtype": "$TEST_TRANSPORT", 00:17:10.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:10.911 "adrfam": "ipv4", 00:17:10.911 "trsvcid": "$NVMF_PORT", 00:17:10.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:10.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:10.911 "hdgst": ${hdgst:-false}, 00:17:10.911 "ddgst": ${ddgst:-false} 00:17:10.911 }, 00:17:10.911 "method": "bdev_nvme_attach_controller" 00:17:10.911 } 00:17:10.911 EOF 00:17:10.911 )") 00:17:10.911 00:51:03 -- nvmf/common.sh@543 -- # cat 00:17:10.911 00:51:03 -- nvmf/common.sh@545 -- # jq . 00:17:10.911 00:51:03 -- nvmf/common.sh@546 -- # IFS=, 00:17:10.911 00:51:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:10.911 "params": { 00:17:10.911 "name": "Nvme1", 00:17:10.911 "trtype": "tcp", 00:17:10.911 "traddr": "10.0.0.2", 00:17:10.911 "adrfam": "ipv4", 00:17:10.911 "trsvcid": "4420", 00:17:10.911 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:10.911 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:10.911 "hdgst": false, 00:17:10.911 "ddgst": false 00:17:10.911 }, 00:17:10.911 "method": "bdev_nvme_attach_controller" 00:17:10.911 }' 00:17:10.911 [2024-04-27 00:51:03.445451] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:17:10.911 [2024-04-27 00:51:03.445541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750939 ] 00:17:10.911 EAL: No free 2048 kB hugepages reported on node 1 00:17:10.911 [2024-04-27 00:51:03.538197] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.171 [2024-04-27 00:51:03.630905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.431 Running I/O for 10 seconds... 
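The printf/jq plumbing traced above boils down to a single document: gen_nvmf_target_json expands one attach-controller stanza per subsystem, and bdevperf reads the result through process substitution (the shell numbers the fd, 62 in this run), so no temp file touches disk. The params block below is copied from the trace; the outer subsystems/config wrapper is an assumption based on SPDK's JSON config layout.

```bash
# Sketch of the config handed to bdevperf above; params copied verbatim
# from the printf in the trace, wrapper assumed.
bdevperf_config() {
    cat <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
}

# 10-second verify run at queue depth 128 with 8 KiB I/O, as in the log
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
    --json <(bdevperf_config) -t 10 -q 128 -w verify -o 8192
```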
00:17:21.408 00:17:21.408 Latency(us) 00:17:21.408 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:21.408 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:17:21.408 Verification LBA range: start 0x0 length 0x1000 00:17:21.408 Nvme1n1 : 10.01 8311.80 64.94 0.00 0.00 15361.08 1862.60 34216.69 00:17:21.408 =================================================================================================================== 00:17:21.409 Total : 8311.80 64.94 0.00 0.00 15361.08 1862.60 34216.69 00:17:21.668 00:51:14 -- target/zcopy.sh@39 -- # perfpid=2753467 00:17:21.668 00:51:14 -- target/zcopy.sh@41 -- # xtrace_disable 00:17:21.668 00:51:14 -- common/autotest_common.sh@10 -- # set +x 00:17:21.668 00:51:14 -- target/zcopy.sh@37 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:17:21.668 00:51:14 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:17:21.668 00:51:14 -- nvmf/common.sh@521 -- # config=() 00:17:21.668 00:51:14 -- nvmf/common.sh@521 -- # local subsystem config 00:17:21.668 00:51:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:17:21.668 00:51:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:17:21.668 { 00:17:21.668 "params": { 00:17:21.668 "name": "Nvme$subsystem", 00:17:21.668 "trtype": "$TEST_TRANSPORT", 00:17:21.668 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:21.668 "adrfam": "ipv4", 00:17:21.668 "trsvcid": "$NVMF_PORT", 00:17:21.668 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:21.668 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:21.668 "hdgst": ${hdgst:-false}, 00:17:21.668 "ddgst": ${ddgst:-false} 00:17:21.668 }, 00:17:21.668 "method": "bdev_nvme_attach_controller" 00:17:21.668 } 00:17:21.668 EOF 00:17:21.668 )") 00:17:21.668 [2024-04-27 00:51:14.353659] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-04-27 00:51:14.353707] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 00:51:14 -- nvmf/common.sh@543 -- # cat 00:17:21.668 00:51:14 -- nvmf/common.sh@545 -- # jq . 
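The wall of "Requested NSID 1 already in use" errors that begins here looks like the test working as intended rather than a failure: while the second bdevperf run (5-second randrw at 50% mix, pid 2753467 in the trace) keeps zcopy I/O in flight, the script repeatedly calls nvmf_subsystem_add_ns with an NSID that already exists. Each attempt pauses and resumes the subsystem around the failing add, which is what exercises zcopy request handling across pause/resume. A sketch under that reading; the loop condition and exact RPC plumbing are assumptions.

```bash
# Hedged sketch of the hot-add loop behind the repeated errors below.
"${SPDK_DIR:?}"/build/examples/bdevperf --json <(bdevperf_config) \
    -t 5 -q 128 -w randrw -M 50 -o 8192 &
perfpid=$!

while kill -0 "$perfpid" 2> /dev/null; do
    # Expected to fail with "Unable to add namespace"; the side effect
    # (subsystem pause/resume under live I/O) is what the test is after.
    "${SPDK_DIR}"/scripts/rpc.py nvmf_subsystem_add_ns \
        nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
done
wait "$perfpid"
```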
00:17:21.668 [2024-04-27 00:51:14.361609] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.668 [2024-04-27 00:51:14.361631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.668 00:51:14 -- nvmf/common.sh@546 -- # IFS=, 00:17:21.668 00:51:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:17:21.668 "params": { 00:17:21.668 "name": "Nvme1", 00:17:21.668 "trtype": "tcp", 00:17:21.668 "traddr": "10.0.0.2", 00:17:21.668 "adrfam": "ipv4", 00:17:21.668 "trsvcid": "4420", 00:17:21.668 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:21.668 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:21.668 "hdgst": false, 00:17:21.668 "ddgst": false 00:17:21.668 }, 00:17:21.668 "method": "bdev_nvme_attach_controller" 00:17:21.668 }' 00:17:21.929 [2024-04-27 00:51:14.369577] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.369598] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.377593] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.377609] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.385586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.385600] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.393579] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.393595] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.401592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.401607] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.409587] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.409603] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.417578] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.417593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.422056] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:17:21.929 [2024-04-27 00:51:14.422168] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753467 ] 00:17:21.929 [2024-04-27 00:51:14.425594] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.425610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.433585] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.433599] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.441592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.441606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.449596] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.449611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.457589] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.457604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.465599] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.465615] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.473597] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.473612] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.481592] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.481606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.489604] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.489617] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.497610] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.497623] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 EAL: No free 2048 kB hugepages reported on node 1 00:17:21.929 [2024-04-27 00:51:14.505612] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.505631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.513607] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.513621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.521601] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.521614] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.529617] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.529631] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.537623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.537638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.538399] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:21.929 [2024-04-27 00:51:14.545610] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.545624] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.553630] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.553646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.561623] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.561637] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.569629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.569643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.577629] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.577643] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.585624] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.585637] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.593641] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.593654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.601634] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.601648] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.609631] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.609644] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:21.929 [2024-04-27 00:51:14.617638] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:21.929 [2024-04-27 00:51:14.617651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.625635] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.625650] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.627545] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:22.191 [2024-04-27 00:51:14.633647] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.633662] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:17:22.191 [2024-04-27 00:51:14.641644] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.641658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.649640] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.649653] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.657654] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.657667] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.665652] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.665665] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.673647] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.673660] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.681660] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.681674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.689665] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.689679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.697664] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.697677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.705667] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.705684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.713666] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.713684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.721671] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.721684] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.729672] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.729687] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.737663] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.737677] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.745676] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:22.191 [2024-04-27 00:51:14.745689] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:22.191 [2024-04-27 00:51:14.753669] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:17:22.191 [2024-04-27 00:51:14.753681] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:22.191 [2024-04-27 00:51:14.761682] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:22.191 [2024-04-27 00:51:14.761696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors repeats with successive timestamps, roughly every 8 ms, through 00:51:14.953758 ...]
00:17:22.452 [2024-04-27 00:51:14.961773] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:22.452 [2024-04-27 00:51:14.961801] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:22.452 Running I/O for 5 seconds...
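The paired errors above are the expected result of repeatedly attaching a namespace under an NSID that is already attached: spdk_nvmf_subsystem_add_ns_ext rejects the duplicate NSID, and the RPC handler then reports the failed add. A minimal sketch of how the same pair can be provoked by hand against a running nvmf target, assuming an already-created subsystem nqn.2016-06.io.spdk:cnode1 and bdev Malloc0 (both names are illustrative, not taken from this log):

  # First attach succeeds and claims NSID 1.
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0
  # Retrying the same NSID is rejected: the target logs
  # "Requested NSID 1 already in use" followed by "Unable to add namespace".
  scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 Malloc0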
00:17:22.452 [2024-04-27 00:51:14.969750] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:22.452 [2024-04-27 00:51:14.969766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors repeats with successive timestamps while the I/O phase runs, through 00:51:17.385888 ...]
00:17:24.791 [2024-04-27 00:51:17.395611] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:17:24.791 [2024-04-27 00:51:17.395638] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:17:24.791 [2024-04-27 00:51:17.405052]
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.405077] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:24.791 [2024-04-27 00:51:17.414288] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.414315] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:24.791 [2024-04-27 00:51:17.424128] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.424154] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:24.791 [2024-04-27 00:51:17.433506] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.433533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:24.791 [2024-04-27 00:51:17.443267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.443292] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:24.791 [2024-04-27 00:51:17.451962] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.451987] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:24.791 [2024-04-27 00:51:17.461695] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.461722] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:24.791 [2024-04-27 00:51:17.470326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.470352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:24.791 [2024-04-27 00:51:17.479548] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:24.791 [2024-04-27 00:51:17.479575] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.488875] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.488901] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.498428] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.498455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.507557] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.507583] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.516879] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.516905] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.527269] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.527295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.536112] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.536138] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.544949] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.544973] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.554146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.554172] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.563317] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.563342] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.571968] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.571995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.581839] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.581865] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.590628] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.590652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.599718] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.599742] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.609034] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.609058] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.617761] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.617785] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.626435] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.626461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.635911] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.635935] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.645858] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.645883] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.655659] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.655686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.664894] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.664919] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.674284] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.674309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.683513] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.683537] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.692740] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.692766] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.702476] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.702501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.711681] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.711709] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.721067] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.721093] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.730832] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.730858] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.051 [2024-04-27 00:51:17.739568] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.051 [2024-04-27 00:51:17.739593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.749405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.749432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.758568] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.758593] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.768039] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.768066] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.777321] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.777347] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.786675] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.786700] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.796301] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.796325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.805787] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.805814] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.815028] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.815054] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.824465] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.824491] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.833685] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.833710] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.843090] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.843118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.852474] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.852499] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.862118] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.862145] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.871393] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.871420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.880515] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.880539] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.889601] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.889625] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.899398] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.899426] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.908088] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.908113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.917347] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.917375] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.926393] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.926418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.935849] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.935875] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.944484] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.944509] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.954201] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.954233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.963005] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.963030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.972668] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.972696] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.981226] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.981250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.990625] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.990652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.311 [2024-04-27 00:51:17.999704] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.311 [2024-04-27 00:51:17.999728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.009442] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.009468] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.018547] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.018572] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.028368] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.028395] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.038195] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.038229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.047013] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.047039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.056308] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.056333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.065543] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.065567] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.075366] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.075391] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.084073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.084099] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.093339] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.093364] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.102393] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.102419] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.111536] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.111561] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.121381] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.121410] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.130119] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.130144] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.139394] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.139420] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.148855] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.148881] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.158057] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.571 [2024-04-27 00:51:18.158084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.571 [2024-04-27 00:51:18.167828] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.167853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.177106] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.177133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.186625] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.186654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.196451] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.196478] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.205746] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.205772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.215666] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.215691] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.224502] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.224527] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.234315] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.234341] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.244173] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.244199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.252841] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.252865] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.572 [2024-04-27 00:51:18.262143] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.572 [2024-04-27 00:51:18.262168] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.271966] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.271992] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.281277] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.281302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.291056] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.291084] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.300356] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.300382] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.309634] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.309658] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.318967] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.318995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.328278] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.328303] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.338187] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.338213] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.346847] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.346871] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.356579] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.356604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.366019] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.366049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.375168] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.375194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.384284] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.384308] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.392898] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.392927] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.401966] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.401991] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.411325] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.411352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.421290] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.421316] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.430423] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.430449] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.439513] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.439540] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.448846] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.448874] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.458195] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.458240] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.467407] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.467436] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.481865] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.481892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.490619] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.490646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.500023] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.500049] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.509234] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.509262] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:25.831 [2024-04-27 00:51:18.517073] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:25.831 [2024-04-27 00:51:18.517101] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.528097] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.528124] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.536893] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.536918] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.546720] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.546751] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.556072] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.556097] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.565620] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.565646] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.575684] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.575713] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.584551] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.584577] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.593934] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.593961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.603410] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.603435] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.612702] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.612729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.622153] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.622179] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.631146] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.631170] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.640365] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.640393] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.649836] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.649861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.659747] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.659772] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.668586] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.668610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.677763] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.677791] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.686533] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.686559] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.695839] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.695864] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.705267] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.705294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.714541] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.714566] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.724000] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.724034] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.733449] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.733475] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.742701] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.742729] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.751932] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.751957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.761338] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.761367] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.770613] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.770639] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.089 [2024-04-27 00:51:18.779987] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.089 [2024-04-27 00:51:18.780014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.789089] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.789117] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.798514] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.798541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.807775] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.807800] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.817010] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.817036] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.826418] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.826443] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.835840] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.835866] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.845062] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.845087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.854306] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.854332] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.863433] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.863461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.873232] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.873257] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.882115] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.882143] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.891472] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.891498] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.901217] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.901250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.911045] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.911073] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.920486] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.920513] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.929835] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.929861] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.938606] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.938634] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.948040] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.948067] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.957414] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.957438] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.967270] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.967298] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.975900] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.975926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.985308] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.985335] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:18.994744] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:18.994770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:19.003422] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:19.003448] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:19.013065] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:19.013090] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:19.021812] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:19.021840] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:19.031000] 
subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:19.031027] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.348 [2024-04-27 00:51:19.040405] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.348 [2024-04-27 00:51:19.040432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.049694] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.049719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.059015] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.059041] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.068900] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.068926] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.077608] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.077635] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.086326] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.086351] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.095657] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.095683] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.104969] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.104995] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.113971] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.113996] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.124005] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.124032] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.692 [2024-04-27 00:51:19.132699] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.692 [2024-04-27 00:51:19.132725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.693 [2024-04-27 00:51:19.142115] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.693 [2024-04-27 00:51:19.142141] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.693 [2024-04-27 00:51:19.152027] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.693 [2024-04-27 00:51:19.152051] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:26.693 [2024-04-27 00:51:19.160860] subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:17:26.693 [2024-04-27 00:51:19.160887] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the two-line pair "subsystem.c:1906:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats verbatim, with only the timestamps advancing, from 00:51:19.170 through 00:51:19.971 (roughly 85 iterations) while the test keeps retrying nvmf_subsystem_add_ns with NSID 1 against the paused subsystem; the repetitions are elided here ...]
00:17:27.475 Latency(us)
00:17:27.475 Device Information                                                             : runtime(s)     IOPS    MiB/s   Fail/s   TO/s   Average      min       max
00:17:27.475 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:17:27.475 Nvme1n1                                                                        :       5.01  16810.43  131.33    0.00    0.00   7606.61  3328.54  19591.81
00:17:27.475 ===================================================================================================================
00:17:27.475 Total                                                                          :             16810.43  131.33    0.00    0.00   7606.61  3328.54  19591.81
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" pair resumes immediately after this latency summary and repeats, timestamps only advancing, from 00:51:19.977 through 00:51:20.361; elided as above ...]
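Aside — a quick worked check of the summary above (my arithmetic, not log output): at an IO size of 8192 bytes, 16810.43 IOPS × 8192 B ≈ 137,711,043 B/s, and 137,711,043 ÷ 2^20 ≈ 131.33 MiB/s, which matches the reported MiB/s column.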
00:17:27.737 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2753467) - No such process
00:51:20 -- target/zcopy.sh@49 -- # wait 2753467
00:51:20 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:20 -- common/autotest_common.sh@10 -- # set +x
00:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:20 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:20 -- common/autotest_common.sh@10 -- # set +x
00:17:27.737 delay0
00:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:20 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:51:20 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:20 -- common/autotest_common.sh@10 -- # set +x
00:51:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:20 -- target/zcopy.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:17:27.996 EAL: No free 2048 kB hugepages reported on node 1
[2024-04-27 00:51:20.513479] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:17:34.564 Initializing NVMe Controllers
00:17:34.564 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:17:34.564 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:17:34.564 Initialization complete. Launching workers.
00:17:34.564 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 689
00:17:34.564 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 968, failed to submit 41
00:17:34.564 success 795, unsuccess 173, failed 0
00:51:26 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:51:26 -- target/zcopy.sh@60 -- # nvmftestfini
00:51:26 -- nvmf/common.sh@477 -- # nvmfcleanup
00:51:26 -- nvmf/common.sh@117 -- # sync
00:51:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:51:26 -- nvmf/common.sh@120 -- # set +e
00:51:26 -- nvmf/common.sh@121 -- # for i in {1..20}
00:51:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:51:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:51:26 -- nvmf/common.sh@124 -- # set -e
00:51:26 -- nvmf/common.sh@125 -- # return 0
00:51:26 -- nvmf/common.sh@478 -- # '[' -n 2750719 ']'
00:51:26 -- nvmf/common.sh@479 -- # killprocess 2750719
00:51:26 -- common/autotest_common.sh@936 -- # '[' -z 2750719 ']'
00:51:26 -- common/autotest_common.sh@940 -- # kill -0 2750719
00:51:26 -- common/autotest_common.sh@941 -- # uname
00:51:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:51:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2750719
00:51:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:51:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:51:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2750719'
killing process with pid 2750719
00:51:26 -- common/autotest_common.sh@955 -- # kill 2750719
00:51:26 -- common/autotest_common.sh@960 -- # wait 2750719
00:51:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:51:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:51:27 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:51:27 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:51:27 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:51:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:51:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:51:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:51:29 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:17:37.356 real 0m33.212s
00:17:37.356 user 0m47.339s
00:17:37.356 sys 0m8.381s
00:51:29 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:51:29 -- common/autotest_common.sh@10 -- # set +x
00:17:37.356 ************************************
00:17:37.356 END TEST nvmf_zcopy
00:17:37.356 ************************************
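Aside — the zcopy abort pass above reduces to a short RPC sequence. A minimal sketch of that sequence, assuming a target already serving nqn.2016-06.io.spdk:cnode1 and assuming SPDK's scripts/rpc.py as the RPC client (rpc_cmd in the trace is the harness's wrapper around it):

  # Swap the subsystem's namespace for a delay bdev, then drive abortable I/O at it.
  ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000    # delay parameters in microseconds (~1 s per I/O)
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # With I/O now slow enough to catch in flight, submit and abort commands over TCP
  # (flags as in the log: core mask 0x1, 5 s run, queue depth 64, 50/50 randrw):
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

The delay bdev is the design point here: without the added latency, commands would complete before the abort could race them.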
00:51:29 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:51:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:51:29 -- common/autotest_common.sh@10 -- # set +x
00:17:37.356 ************************************
00:17:37.356 START TEST nvmf_nmic
00:17:37.356 ************************************
00:51:29 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:17:37.356 * Looking for test storage...
00:17:37.356 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target
00:51:29 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh
00:51:29 -- nvmf/common.sh@7 -- # uname -s
00:51:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:51:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:51:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:51:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:51:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:51:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:51:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:51:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:51:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:51:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:51:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea
00:51:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea
00:51:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:51:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:51:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:51:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:51:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:51:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:51:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:51:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:51:29 -- paths/export.sh@2 -- # PATH=[elided — see note]
00:51:29 -- paths/export.sh@3 -- # PATH=[elided — see note]
00:51:29 -- paths/export.sh@4 -- # PATH=[elided — see note]
00:51:29 -- paths/export.sh@5 -- # export PATH
00:51:29 -- paths/export.sh@6 -- # echo [elided — see note]
[note: each elided PATH value is the standard /usr/local/bin:...:/var/lib/snapd/snap/bin list with the same /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin toolchain triple prepended repeatedly; the full, heavily duplicated strings are omitted here]
00:51:29 -- nvmf/common.sh@47 -- # : 0
00:51:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:51:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:51:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:51:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:51:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:51:29 -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:51:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:51:29 -- nvmf/common.sh@51 -- # have_pci_nics=0
00:51:29 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
00:51:29 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:51:29 -- target/nmic.sh@14 -- # nvmftestinit
00:51:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']'
00:51:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:51:29 -- nvmf/common.sh@437 -- # prepare_net_devs
00:51:29 -- nvmf/common.sh@399 -- # local -g is_hw=no
00:51:29 -- nvmf/common.sh@401 -- # remove_spdk_ns
00:51:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:51:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:51:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:51:29 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]]
00:51:29 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs
00:51:29 -- nvmf/common.sh@285 -- # xtrace_disable
00:51:29 -- common/autotest_common.sh@10 -- # set +x
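Aside — the NVME_HOSTNQN/NVME_HOSTID pair captured above comes straight from nvme-cli. A minimal sketch of the same derivation (the variable names are the harness's own; the `##*:` trim is my assumption about extracting the UUID, not a quote from common.sh):

  NVME_HOSTNQN=$(nvme gen-hostnqn)       # e.g. nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea
  NVME_HOSTID=${NVME_HOSTNQN##*:}        # bare UUID; both are passed later to 'nvme connect' as --hostnqn/--hostid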
00:17:42.630 00:51:34 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci
00:51:34 -- nvmf/common.sh@291 -- # pci_devs=()
00:51:34 -- nvmf/common.sh@291 -- # local -a pci_devs
00:51:34 -- nvmf/common.sh@292 -- # pci_net_devs=()
00:51:34 -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:51:34 -- nvmf/common.sh@293 -- # pci_drivers=()
00:51:34 -- nvmf/common.sh@293 -- # local -A pci_drivers
00:51:34 -- nvmf/common.sh@295 -- # net_devs=()
00:51:34 -- nvmf/common.sh@295 -- # local -ga net_devs
00:51:34 -- nvmf/common.sh@296 -- # e810=()
00:51:34 -- nvmf/common.sh@296 -- # local -ga e810
00:51:34 -- nvmf/common.sh@297 -- # x722=()
00:51:34 -- nvmf/common.sh@297 -- # local -ga x722
00:51:34 -- nvmf/common.sh@298 -- # mlx=()
00:51:34 -- nvmf/common.sh@298 -- # local -ga mlx
00:51:34 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:51:34 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:51:34 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:51:34 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:51:34 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:51:34 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:51:34 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:51:34 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:51:34 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:51:34 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:51:34 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:51:34 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:51:34 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:51:34 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]]
00:51:34 -- nvmf/common.sh@329 -- # [[ '' == e810 ]]
00:51:34 -- nvmf/common.sh@331 -- # [[ '' == x722 ]]
00:51:34 -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:51:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:51:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)'
Found 0000:27:00.0 (0x8086 - 0x159b)
00:51:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:51:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:51:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:51:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:51:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:51:34 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:51:34 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)'
Found 0000:27:00.1 (0x8086 - 0x159b)
00:51:34 -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:51:34 -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:51:34 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:51:34 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:51:34 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:51:34 -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:51:34 -- nvmf/common.sh@372 -- # [[ '' == e810 ]]
00:51:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:51:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:51:34 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:51:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:51:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0'
Found net devices under 0000:27:00.0: cvl_0_0
00:51:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:51:34 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:51:34 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:51:34 -- nvmf/common.sh@384 -- # (( 1 == 0 ))
00:51:34 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:51:34 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1'
Found net devices under 0000:27:00.1: cvl_0_1
00:51:34 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}")
00:51:34 -- nvmf/common.sh@393 -- # (( 2 == 0 ))
00:51:34 -- nvmf/common.sh@403 -- # is_hw=yes
00:51:34 -- nvmf/common.sh@405 -- # [[ yes == yes ]]
00:51:34 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]]
00:51:34 -- nvmf/common.sh@407 -- # nvmf_tcp_init
00:51:34 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:51:34 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:51:34 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:51:34 -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:51:34 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:51:34 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:51:34 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:51:34 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:51:34 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:51:34 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:51:34 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:51:34 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:51:34 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:51:34 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:51:34 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:51:34 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:51:34 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:51:34 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:51:34 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:51:34 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
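Aside — the nvmf_tcp_init sequence above carves the two ports of the ice NIC into a self-contained loopback rig: cvl_0_1 (10.0.0.1/24) stays in the host namespace as the initiator, while cvl_0_0 (10.0.0.2/24) moves into the cvl_0_0_ns_spdk namespace to host the target. A quick manual check of that rig (a sketch assuming the interface and namespace names above) would be:

  ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0   # expect 10.0.0.2/24 inside the target namespace
  ping -c 1 10.0.0.2                                      # initiator-side reachability
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # target-side reachability

The last two commands are exactly what the harness runs next.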
00:17:42.630 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:17:42.630 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.447 ms
00:17:42.630 --- 10.0.0.2 ping statistics ---
00:17:42.630 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:42.630 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms
00:51:34 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:17:42.630 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:17:42.630 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms
00:17:42.631 --- 10.0.0.1 ping statistics ---
00:17:42.631 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:17:42.631 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms
00:51:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:51:35 -- nvmf/common.sh@411 -- # return 0
00:51:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']'
00:51:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:51:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]]
00:51:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]]
00:51:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:51:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']'
00:51:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp
00:51:35 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
00:51:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt
00:51:35 -- common/autotest_common.sh@710 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
00:51:35 -- nvmf/common.sh@470 -- # nvmfpid=2759826
00:51:35 -- nvmf/common.sh@471 -- # waitforlisten 2759826
00:51:35 -- common/autotest_common.sh@817 -- # '[' -z 2759826 ']'
00:51:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock
00:51:35 -- common/autotest_common.sh@822 -- # local max_retries=100
00:51:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:51:35 -- common/autotest_common.sh@826 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
00:51:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
[2024-04-27 00:51:35.104877] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization...
[2024-04-27 00:51:35.104978] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-04-27 00:51:35.225928] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4
[2024-04-27 00:51:35.320073] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-04-27 00:51:35.320108] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-04-27 00:51:35.320119] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-04-27 00:51:35.320128] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-04-27 00:51:35.320135] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
[2024-04-27 00:51:35.320263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2
[2024-04-27 00:51:35.320268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
[2024-04-27 00:51:35.320336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0
[2024-04-27 00:51:35.320347] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3
00:17:43.200 00:51:35 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:51:35 -- common/autotest_common.sh@850 -- # return 0
00:51:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt
00:51:35 -- common/autotest_common.sh@716 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
00:51:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:51:35 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
[2024-04-27 00:51:35.864580] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:35 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
00:17:43.461 Malloc0
00:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:35 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
00:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:35 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
00:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:35 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
[2024-04-27 00:51:35.929795] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:35 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
test case1: single bdev can't be used in multiple subsystems
00:51:35 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
00:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:35 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
00:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:35 -- target/nmic.sh@28 -- # nmic_status=0
00:51:35 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
[2024-04-27 00:51:35.953643] bdev.c:8005:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
[2024-04-27 00:51:35.953673] subsystem.c:1940:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
[2024-04-27 00:51:35.953686] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
request:
{
  "nqn": "nqn.2016-06.io.spdk:cnode2",
  "namespace": {
    "bdev_name": "Malloc0",
    "no_auto_visible": false
  },
  "method": "nvmf_subsystem_add_ns",
  "req_id": 1
}
Got JSON-RPC error response
response:
{
  "code": -32602,
  "message": "Invalid parameters"
}
00:51:35 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]]
00:51:35 -- target/nmic.sh@29 -- # nmic_status=1
00:51:35 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:51:35 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
 Adding namespace failed - expected result.
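Aside — the exclusive_write claim failure above is the expected behavior of the test: a bdev can back at most one subsystem namespace. A minimal sketch of reproducing the same check by hand, assuming a freshly started target and SPDK's scripts/rpc.py as the client (the same RPCs the trace shows):

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # first subsystem claims the bdev
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # rejected: Malloc0 already claimed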
00:17:43.461 00:51:35 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
test case2: host connect to nvmf target in multiple paths
00:51:35 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:51:35 -- common/autotest_common.sh@549 -- # xtrace_disable
00:51:35 -- common/autotest_common.sh@10 -- # set +x
[2024-04-27 00:51:35.961788] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:51:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:51:35 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:51:37 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:51:38 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:51:38 -- common/autotest_common.sh@1184 -- # local i=0
00:51:38 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0
00:51:38 -- common/autotest_common.sh@1186 -- # [[ -n '' ]]
00:51:38 -- common/autotest_common.sh@1191 -- # sleep 2
00:51:40 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 ))
00:51:40 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL
00:51:40 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME
00:51:40 -- common/autotest_common.sh@1193 -- # nvme_devices=1
00:51:40 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter ))
00:51:40 -- common/autotest_common.sh@1194 -- # return 0
00:51:40 -- target/nmic.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel
[job0]
filename=/dev/nvme0n1
Could not set queue depth (nvme0n1)
job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
fio-3.35
Starting 1 thread
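Aside — fio-wrapper only generates and runs the job file printed above. A roughly equivalent direct fio invocation (a hypothetical reconstruction from those job-file parameters, not a command taken from the log) would be:

  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 --thread=1 \
      --invalidate=1 --rw=write --bs=4096 --iodepth=1 --numjobs=1 \
      --time_based=1 --runtime=1 --do_verify=1 --verify=crc32c-intel \
      --verify_dump=1 --verify_backlog=512 --verify_state_save=0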
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:17:50.027 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:17:50.027 | 99.99th=[41681] 00:17:50.027 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:17:50.027 slat (usec): min=6, max=25066, avg=65.10, stdev=1107.18 00:17:50.027 clat (usec): min=119, max=659, avg=227.78, stdev=66.93 00:17:50.027 lat (usec): min=126, max=25598, avg=292.88, stdev=1122.98 00:17:50.027 clat percentiles (usec): 00:17:50.027 | 1.00th=[ 122], 5.00th=[ 128], 10.00th=[ 174], 20.00th=[ 202], 00:17:50.027 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 210], 60.00th=[ 215], 00:17:50.027 | 70.00th=[ 221], 80.00th=[ 258], 90.00th=[ 326], 95.00th=[ 363], 00:17:50.027 | 99.00th=[ 461], 99.50th=[ 510], 99.90th=[ 660], 99.95th=[ 660], 00:17:50.027 | 99.99th=[ 660] 00:17:50.027 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:50.027 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:50.027 lat (usec) : 250=75.47%, 500=19.85%, 750=0.75% 00:17:50.027 lat (msec) : 50=3.93% 00:17:50.027 cpu : usr=0.59%, sys=0.89%, ctx=536, majf=0, minf=1 00:17:50.027 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:50.027 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.027 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:50.027 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:50.027 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:50.027 00:17:50.027 Run status group 0 (all jobs): 00:17:50.027 READ: bw=86.9KiB/s (89.0kB/s), 86.9KiB/s-86.9KiB/s (89.0kB/s-89.0kB/s), io=88.0KiB (90.1kB), run=1013-1013msec 00:17:50.027 WRITE: bw=2022KiB/s (2070kB/s), 2022KiB/s-2022KiB/s (2070kB/s-2070kB/s), io=2048KiB (2097kB), run=1013-1013msec 00:17:50.027 00:17:50.027 Disk stats (read/write): 00:17:50.027 nvme0n1: ios=45/512, merge=0/0, ticks=1724/105, in_queue=1829, util=98.60% 00:17:50.027 00:51:42 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.289 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:50.289 00:51:42 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:50.289 00:51:42 -- common/autotest_common.sh@1205 -- # local i=0 00:17:50.289 00:51:42 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:17:50.289 00:51:42 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:50.289 00:51:42 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:50.289 00:51:42 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:17:50.289 00:51:42 -- common/autotest_common.sh@1217 -- # return 0 00:17:50.289 00:51:42 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:50.289 00:51:42 -- target/nmic.sh@53 -- # nvmftestfini 00:17:50.289 00:51:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:17:50.289 00:51:42 -- nvmf/common.sh@117 -- # sync 00:17:50.289 00:51:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:50.289 00:51:42 -- nvmf/common.sh@120 -- # set +e 00:17:50.289 00:51:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:50.289 00:51:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:50.289 rmmod nvme_tcp 00:17:50.289 rmmod nvme_fabrics 00:17:50.289 rmmod nvme_keyring 00:17:50.289 00:51:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:50.289 00:51:42 -- nvmf/common.sh@124 -- # set -e 00:17:50.289 
00:51:42 -- nvmf/common.sh@125 -- # return 0 00:17:50.289 00:51:42 -- nvmf/common.sh@478 -- # '[' -n 2759826 ']' 00:17:50.289 00:51:42 -- nvmf/common.sh@479 -- # killprocess 2759826 00:17:50.289 00:51:42 -- common/autotest_common.sh@936 -- # '[' -z 2759826 ']' 00:17:50.289 00:51:42 -- common/autotest_common.sh@940 -- # kill -0 2759826 00:17:50.289 00:51:42 -- common/autotest_common.sh@941 -- # uname 00:17:50.289 00:51:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:50.289 00:51:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2759826 00:17:50.289 00:51:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:50.289 00:51:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:50.289 00:51:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2759826' 00:17:50.289 killing process with pid 2759826 00:17:50.289 00:51:42 -- common/autotest_common.sh@955 -- # kill 2759826 00:17:50.289 00:51:42 -- common/autotest_common.sh@960 -- # wait 2759826 00:17:50.857 00:51:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:17:50.857 00:51:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:17:50.857 00:51:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:17:50.857 00:51:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:50.857 00:51:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:50.857 00:51:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.857 00:51:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:50.857 00:51:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.399 00:51:45 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:53.399 00:17:53.399 real 0m15.971s 00:17:53.399 user 0m47.831s 00:17:53.399 sys 0m4.630s 00:17:53.399 00:51:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:53.399 00:51:45 -- common/autotest_common.sh@10 -- # set +x 00:17:53.399 ************************************ 00:17:53.399 END TEST nvmf_nmic 00:17:53.399 ************************************ 00:17:53.400 00:51:45 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:53.400 00:51:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:53.400 00:51:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:53.400 00:51:45 -- common/autotest_common.sh@10 -- # set +x 00:17:53.400 ************************************ 00:17:53.400 START TEST nvmf_fio_target 00:17:53.400 ************************************ 00:17:53.400 00:51:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:53.400 * Looking for test storage... 
00:17:53.400 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:17:53.400 00:51:45 -- target/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:17:53.400 00:51:45 -- nvmf/common.sh@7 -- # uname -s 00:17:53.400 00:51:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:53.400 00:51:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:53.400 00:51:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:53.400 00:51:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:53.400 00:51:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:53.400 00:51:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:53.400 00:51:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:53.400 00:51:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:53.400 00:51:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:53.400 00:51:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:53.400 00:51:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:17:53.400 00:51:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:17:53.400 00:51:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:53.400 00:51:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:53.400 00:51:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:53.400 00:51:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:53.400 00:51:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:17:53.400 00:51:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:53.400 00:51:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:53.400 00:51:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:53.400 00:51:45 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.400 00:51:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.400 00:51:45 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.400 00:51:45 -- paths/export.sh@5 -- # export PATH 00:17:53.400 00:51:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:53.400 00:51:45 -- nvmf/common.sh@47 -- # : 0 00:17:53.400 00:51:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:53.400 00:51:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:53.400 00:51:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:53.400 00:51:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:53.400 00:51:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:53.400 00:51:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:53.400 00:51:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:53.400 00:51:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:53.400 00:51:45 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:53.400 00:51:45 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:53.400 00:51:45 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:17:53.400 00:51:45 -- target/fio.sh@16 -- # nvmftestinit 00:17:53.400 00:51:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:17:53.400 00:51:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:53.400 00:51:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:17:53.400 00:51:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:17:53.400 00:51:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:17:53.400 00:51:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.400 00:51:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.400 00:51:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:53.400 00:51:45 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:17:53.400 00:51:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:17:53.400 00:51:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:17:53.400 00:51:45 -- common/autotest_common.sh@10 -- # set +x 00:17:58.676 00:51:50 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:58.677 00:51:50 -- nvmf/common.sh@291 -- # pci_devs=() 00:17:58.677 00:51:50 -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:58.677 00:51:50 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:58.677 00:51:50 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:58.677 00:51:50 -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:58.677 00:51:50 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:58.677 00:51:50 -- nvmf/common.sh@295 -- # net_devs=() 
00:17:58.677 00:51:50 -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:58.677 00:51:50 -- nvmf/common.sh@296 -- # e810=() 00:17:58.677 00:51:50 -- nvmf/common.sh@296 -- # local -ga e810 00:17:58.677 00:51:50 -- nvmf/common.sh@297 -- # x722=() 00:17:58.677 00:51:50 -- nvmf/common.sh@297 -- # local -ga x722 00:17:58.677 00:51:50 -- nvmf/common.sh@298 -- # mlx=() 00:17:58.677 00:51:50 -- nvmf/common.sh@298 -- # local -ga mlx 00:17:58.677 00:51:50 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.677 00:51:50 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:58.677 00:51:50 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:58.677 00:51:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.677 00:51:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:17:58.677 Found 0000:27:00.0 (0x8086 - 0x159b) 00:17:58.677 00:51:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:58.677 00:51:50 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:17:58.677 Found 0000:27:00.1 (0x8086 - 0x159b) 00:17:58.677 00:51:50 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:58.677 00:51:50 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.677 00:51:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.677 00:51:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:58.677 00:51:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.677 00:51:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 
0000:27:00.0: cvl_0_0' 00:17:58.677 Found net devices under 0000:27:00.0: cvl_0_0 00:17:58.677 00:51:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.677 00:51:50 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:58.677 00:51:50 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.677 00:51:50 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:17:58.677 00:51:50 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.677 00:51:50 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:17:58.677 Found net devices under 0000:27:00.1: cvl_0_1 00:17:58.677 00:51:50 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.677 00:51:50 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:17:58.677 00:51:50 -- nvmf/common.sh@403 -- # is_hw=yes 00:17:58.677 00:51:50 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:17:58.677 00:51:50 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:17:58.677 00:51:50 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:58.677 00:51:50 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:58.677 00:51:50 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:58.677 00:51:50 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:58.677 00:51:50 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:58.677 00:51:50 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:58.677 00:51:50 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:58.677 00:51:50 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:58.677 00:51:50 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:58.677 00:51:50 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:58.677 00:51:50 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:58.677 00:51:50 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:58.677 00:51:50 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:58.677 00:51:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:58.677 00:51:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:58.677 00:51:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:58.677 00:51:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:58.677 00:51:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:58.677 00:51:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:58.677 00:51:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:58.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:58.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:17:58.677 00:17:58.677 --- 10.0.0.2 ping statistics --- 00:17:58.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.677 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:58.677 00:51:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:58.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:58.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.049 ms 00:17:58.677 00:17:58.677 --- 10.0.0.1 ping statistics --- 00:17:58.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:58.677 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:17:58.677 00:51:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:58.677 00:51:51 -- nvmf/common.sh@411 -- # return 0 00:17:58.677 00:51:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:17:58.677 00:51:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:58.677 00:51:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:17:58.677 00:51:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:17:58.677 00:51:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:58.677 00:51:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:17:58.677 00:51:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:17:58.677 00:51:51 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:58.677 00:51:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:17:58.677 00:51:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:58.677 00:51:51 -- common/autotest_common.sh@10 -- # set +x 00:17:58.677 00:51:51 -- nvmf/common.sh@470 -- # nvmfpid=2765506 00:17:58.677 00:51:51 -- nvmf/common.sh@471 -- # waitforlisten 2765506 00:17:58.677 00:51:51 -- common/autotest_common.sh@817 -- # '[' -z 2765506 ']' 00:17:58.677 00:51:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.677 00:51:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:58.677 00:51:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.678 00:51:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:58.678 00:51:51 -- common/autotest_common.sh@10 -- # set +x 00:17:58.678 00:51:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:58.678 [2024-04-27 00:51:51.207938] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:17:58.678 [2024-04-27 00:51:51.208038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.678 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.678 [2024-04-27 00:51:51.328427] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.935 [2024-04-27 00:51:51.426983] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.936 [2024-04-27 00:51:51.427022] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.936 [2024-04-27 00:51:51.427034] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.936 [2024-04-27 00:51:51.427045] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.936 [2024-04-27 00:51:51.427053] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
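Before the target application starts up below, the two pings above validate the loopback topology the harness assembled a few lines earlier. Condensed into one standalone sketch, with the interface names, namespace name, addresses, and port copied verbatim from this log (run as root; cvl_0_0/cvl_0_1 are the two ports of the NIC found during PCI discovery):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator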
00:17:58.936 [2024-04-27 00:51:51.427165] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.936 [2024-04-27 00:51:51.427182] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.936 [2024-04-27 00:51:51.427297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.936 [2024-04-27 00:51:51.427306] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:59.504 00:51:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:59.504 00:51:51 -- common/autotest_common.sh@850 -- # return 0 00:17:59.504 00:51:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:17:59.504 00:51:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:59.504 00:51:51 -- common/autotest_common.sh@10 -- # set +x 00:17:59.504 00:51:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.504 00:51:51 -- target/fio.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:59.504 [2024-04-27 00:51:52.063866] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.504 00:51:52 -- target/fio.sh@21 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.763 00:51:52 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:59.763 00:51:52 -- target/fio.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:59.763 00:51:52 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:59.763 00:51:52 -- target/fio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.021 00:51:52 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:18:00.021 00:51:52 -- target/fio.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.278 00:51:52 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:00.278 00:51:52 -- target/fio.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:00.606 00:51:52 -- target/fio.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.606 00:51:53 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:00.607 00:51:53 -- target/fio.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.864 00:51:53 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:00.864 00:51:53 -- target/fio.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:00.864 00:51:53 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:00.864 00:51:53 -- target/fio.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:01.124 00:51:53 -- target/fio.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:01.124 00:51:53 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:01.124 00:51:53 -- target/fio.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:01.385 00:51:53 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:01.385 00:51:53 -- target/fio.sh@36 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:01.385 00:51:54 -- target/fio.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:01.645 [2024-04-27 00:51:54.198756] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:01.645 00:51:54 -- target/fio.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:01.906 00:51:54 -- target/fio.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:01.906 00:51:54 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:03.282 00:51:55 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:03.282 00:51:55 -- common/autotest_common.sh@1184 -- # local i=0 00:18:03.282 00:51:55 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.282 00:51:55 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:18:03.282 00:51:55 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:18:03.282 00:51:55 -- common/autotest_common.sh@1191 -- # sleep 2 00:18:05.819 00:51:57 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:18:05.819 00:51:57 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:18:05.819 00:51:57 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.819 00:51:57 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:18:05.819 00:51:57 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.819 00:51:57 -- common/autotest_common.sh@1194 -- # return 0 00:18:05.819 00:51:57 -- target/fio.sh@50 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:05.819 [global] 00:18:05.819 thread=1 00:18:05.819 invalidate=1 00:18:05.819 rw=write 00:18:05.819 time_based=1 00:18:05.819 runtime=1 00:18:05.819 ioengine=libaio 00:18:05.819 direct=1 00:18:05.819 bs=4096 00:18:05.819 iodepth=1 00:18:05.819 norandommap=0 00:18:05.819 numjobs=1 00:18:05.819 00:18:05.819 verify_dump=1 00:18:05.819 verify_backlog=512 00:18:05.819 verify_state_save=0 00:18:05.819 do_verify=1 00:18:05.819 verify=crc32c-intel 00:18:05.819 [job0] 00:18:05.819 filename=/dev/nvme0n1 00:18:05.819 [job1] 00:18:05.819 filename=/dev/nvme0n2 00:18:05.819 [job2] 00:18:05.819 filename=/dev/nvme0n3 00:18:05.819 [job3] 00:18:05.819 filename=/dev/nvme0n4 00:18:05.819 Could not set queue depth (nvme0n1) 00:18:05.819 Could not set queue depth (nvme0n2) 00:18:05.819 Could not set queue depth (nvme0n3) 00:18:05.819 Could not set queue depth (nvme0n4) 00:18:05.819 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:05.819 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:05.819 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:05.819 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:05.819 fio-3.35 00:18:05.819 Starting 4 threads 00:18:07.218 
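Comparing this fio-wrapper invocation (and the -d 128 -t write/-t randwrite invocations later in this log) with the job files they emit suggests the flag-to-option mapping below. This is inferred from the log output alone, not from the wrapper's source, so treat it as a sketch:

    # fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
    #   -p nvmf    -> filename=/dev/nvme0n*  (one [jobN] stanza per attached namespace)
    #   -i 4096    -> bs=4096
    #   -d 1       -> iodepth=1
    #   -t write   -> rw=write
    #   -r 1       -> runtime=1 together with time_based=1
    #   -v         -> do_verify=1 and verify=crc32c-intel (plus verify_dump/verify_backlog)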
00:18:07.218 job0: (groupid=0, jobs=1): err= 0: pid=2767144: Sat Apr 27 00:51:59 2024 00:18:07.218 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:07.218 slat (nsec): min=4947, max=53279, avg=14919.09, stdev=10822.69 00:18:07.218 clat (usec): min=150, max=639, avg=322.04, stdev=83.41 00:18:07.218 lat (usec): min=156, max=645, avg=336.96, stdev=89.37 00:18:07.218 clat percentiles (usec): 00:18:07.218 | 1.00th=[ 194], 5.00th=[ 217], 10.00th=[ 233], 20.00th=[ 251], 00:18:07.218 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 302], 60.00th=[ 334], 00:18:07.218 | 70.00th=[ 363], 80.00th=[ 392], 90.00th=[ 449], 95.00th=[ 486], 00:18:07.218 | 99.00th=[ 529], 99.50th=[ 553], 99.90th=[ 586], 99.95th=[ 644], 00:18:07.218 | 99.99th=[ 644] 00:18:07.218 write: IOPS=1898, BW=7592KiB/s (7775kB/s)(7600KiB/1001msec); 0 zone resets 00:18:07.218 slat (nsec): min=6704, max=82710, avg=18291.26, stdev=13273.29 00:18:07.218 clat (usec): min=104, max=593, avg=226.84, stdev=78.87 00:18:07.218 lat (usec): min=114, max=629, avg=245.13, stdev=86.30 00:18:07.218 clat percentiles (usec): 00:18:07.218 | 1.00th=[ 116], 5.00th=[ 131], 10.00th=[ 143], 20.00th=[ 165], 00:18:07.218 | 30.00th=[ 176], 40.00th=[ 188], 50.00th=[ 200], 60.00th=[ 233], 00:18:07.218 | 70.00th=[ 255], 80.00th=[ 293], 90.00th=[ 330], 95.00th=[ 379], 00:18:07.218 | 99.00th=[ 461], 99.50th=[ 482], 99.90th=[ 586], 99.95th=[ 594], 00:18:07.218 | 99.99th=[ 594] 00:18:07.218 bw ( KiB/s): min= 8192, max= 8192, per=26.20%, avg=8192.00, stdev= 0.00, samples=1 00:18:07.218 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:07.218 lat (usec) : 250=46.16%, 500=52.18%, 750=1.66% 00:18:07.218 cpu : usr=3.40%, sys=4.80%, ctx=3438, majf=0, minf=1 00:18:07.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.218 issued rwts: total=1536,1900,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.218 job1: (groupid=0, jobs=1): err= 0: pid=2767145: Sat Apr 27 00:51:59 2024 00:18:07.218 read: IOPS=1641, BW=6567KiB/s (6724kB/s)(6580KiB/1002msec) 00:18:07.218 slat (nsec): min=4944, max=48899, avg=9395.35, stdev=8196.69 00:18:07.218 clat (usec): min=194, max=1039, avg=334.86, stdev=130.78 00:18:07.218 lat (usec): min=199, max=1069, avg=344.25, stdev=136.89 00:18:07.218 clat percentiles (usec): 00:18:07.218 | 1.00th=[ 210], 5.00th=[ 231], 10.00th=[ 245], 20.00th=[ 262], 00:18:07.218 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 289], 60.00th=[ 306], 00:18:07.218 | 70.00th=[ 322], 80.00th=[ 355], 90.00th=[ 498], 95.00th=[ 676], 00:18:07.218 | 99.00th=[ 840], 99.50th=[ 898], 99.90th=[ 971], 99.95th=[ 1037], 00:18:07.218 | 99.99th=[ 1037] 00:18:07.218 write: IOPS=2043, BW=8176KiB/s (8372kB/s)(8192KiB/1002msec); 0 zone resets 00:18:07.218 slat (nsec): min=6330, max=70876, avg=10547.97, stdev=7492.70 00:18:07.218 clat (usec): min=117, max=668, avg=196.70, stdev=56.74 00:18:07.218 lat (usec): min=126, max=679, avg=207.25, stdev=59.41 00:18:07.218 clat percentiles (usec): 00:18:07.218 | 1.00th=[ 127], 5.00th=[ 139], 10.00th=[ 147], 20.00th=[ 159], 00:18:07.218 | 30.00th=[ 169], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 194], 00:18:07.218 | 70.00th=[ 204], 80.00th=[ 223], 90.00th=[ 249], 95.00th=[ 310], 00:18:07.218 | 99.00th=[ 437], 99.50th=[ 486], 99.90th=[ 537], 99.95th=[ 611], 
00:18:07.218 | 99.99th=[ 668] 00:18:07.218 bw ( KiB/s): min= 8192, max= 8192, per=26.20%, avg=8192.00, stdev= 0.00, samples=2 00:18:07.218 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:18:07.218 lat (usec) : 250=55.86%, 500=39.56%, 750=3.63%, 1000=0.92% 00:18:07.218 lat (msec) : 2=0.03% 00:18:07.218 cpu : usr=2.20%, sys=5.00%, ctx=3695, majf=0, minf=1 00:18:07.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.218 issued rwts: total=1645,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.218 job2: (groupid=0, jobs=1): err= 0: pid=2767146: Sat Apr 27 00:51:59 2024 00:18:07.218 read: IOPS=1999, BW=7996KiB/s (8188kB/s)(8004KiB/1001msec) 00:18:07.218 slat (nsec): min=4557, max=63089, avg=8582.29, stdev=7542.67 00:18:07.218 clat (usec): min=191, max=848, avg=294.64, stdev=107.57 00:18:07.218 lat (usec): min=197, max=879, avg=303.22, stdev=113.09 00:18:07.218 clat percentiles (usec): 00:18:07.218 | 1.00th=[ 208], 5.00th=[ 221], 10.00th=[ 227], 20.00th=[ 237], 00:18:07.218 | 30.00th=[ 245], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 273], 00:18:07.218 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 445], 95.00th=[ 570], 00:18:07.218 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 816], 99.95th=[ 816], 00:18:07.218 | 99.99th=[ 848] 00:18:07.218 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:07.218 slat (nsec): min=6220, max=63960, avg=9558.20, stdev=6645.47 00:18:07.218 clat (usec): min=102, max=727, avg=178.83, stdev=42.48 00:18:07.218 lat (usec): min=130, max=791, avg=188.39, stdev=45.36 00:18:07.218 clat percentiles (usec): 00:18:07.218 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 143], 20.00th=[ 153], 00:18:07.218 | 30.00th=[ 159], 40.00th=[ 167], 50.00th=[ 174], 60.00th=[ 180], 00:18:07.218 | 70.00th=[ 186], 80.00th=[ 194], 90.00th=[ 210], 95.00th=[ 243], 00:18:07.218 | 99.00th=[ 375], 99.50th=[ 396], 99.90th=[ 529], 99.95th=[ 537], 00:18:07.218 | 99.99th=[ 725] 00:18:07.218 bw ( KiB/s): min= 8192, max= 8192, per=26.20%, avg=8192.00, stdev= 0.00, samples=1 00:18:07.218 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:07.218 lat (usec) : 250=65.87%, 500=30.28%, 750=3.58%, 1000=0.27% 00:18:07.218 cpu : usr=1.60%, sys=5.40%, ctx=4049, majf=0, minf=1 00:18:07.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.218 issued rwts: total=2001,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.218 job3: (groupid=0, jobs=1): err= 0: pid=2767147: Sat Apr 27 00:51:59 2024 00:18:07.218 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:07.218 slat (nsec): min=5210, max=48820, avg=16478.31, stdev=12287.81 00:18:07.218 clat (usec): min=194, max=828, avg=323.02, stdev=109.69 00:18:07.218 lat (usec): min=201, max=859, avg=339.50, stdev=117.34 00:18:07.218 clat percentiles (usec): 00:18:07.218 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 233], 20.00th=[ 243], 00:18:07.218 | 30.00th=[ 255], 40.00th=[ 265], 50.00th=[ 281], 60.00th=[ 318], 00:18:07.218 | 70.00th=[ 351], 80.00th=[ 379], 90.00th=[ 
461], 95.00th=[ 578], 00:18:07.218 | 99.00th=[ 725], 99.50th=[ 750], 99.90th=[ 807], 99.95th=[ 832], 00:18:07.218 | 99.99th=[ 832] 00:18:07.218 write: IOPS=1833, BW=7333KiB/s (7509kB/s)(7340KiB/1001msec); 0 zone resets 00:18:07.218 slat (nsec): min=6891, max=75391, avg=19394.63, stdev=14903.77 00:18:07.218 clat (usec): min=124, max=541, avg=232.63, stdev=67.28 00:18:07.218 lat (usec): min=132, max=578, avg=252.02, stdev=76.12 00:18:07.218 clat percentiles (usec): 00:18:07.218 | 1.00th=[ 139], 5.00th=[ 153], 10.00th=[ 161], 20.00th=[ 174], 00:18:07.218 | 30.00th=[ 182], 40.00th=[ 196], 50.00th=[ 225], 60.00th=[ 245], 00:18:07.218 | 70.00th=[ 262], 80.00th=[ 285], 90.00th=[ 322], 95.00th=[ 367], 00:18:07.218 | 99.00th=[ 424], 99.50th=[ 441], 99.90th=[ 490], 99.95th=[ 545], 00:18:07.218 | 99.99th=[ 545] 00:18:07.218 bw ( KiB/s): min= 7688, max= 7688, per=24.59%, avg=7688.00, stdev= 0.00, samples=1 00:18:07.218 iops : min= 1922, max= 1922, avg=1922.00, stdev= 0.00, samples=1 00:18:07.218 lat (usec) : 250=46.84%, 500=49.75%, 750=3.17%, 1000=0.24% 00:18:07.218 cpu : usr=3.40%, sys=8.70%, ctx=3372, majf=0, minf=1 00:18:07.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:07.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:07.218 issued rwts: total=1536,1835,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:07.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:07.218 00:18:07.218 Run status group 0 (all jobs): 00:18:07.218 READ: bw=26.2MiB/s (27.5MB/s), 6138KiB/s-7996KiB/s (6285kB/s-8188kB/s), io=26.2MiB (27.5MB), run=1001-1002msec 00:18:07.218 WRITE: bw=30.5MiB/s (32.0MB/s), 7333KiB/s-8184KiB/s (7509kB/s-8380kB/s), io=30.6MiB (32.1MB), run=1001-1002msec 00:18:07.218 00:18:07.218 Disk stats (read/write): 00:18:07.218 nvme0n1: ios=1332/1536, merge=0/0, ticks=647/344, in_queue=991, util=83.47% 00:18:07.218 nvme0n2: ios=1443/1536, merge=0/0, ticks=916/297, in_queue=1213, util=89.25% 00:18:07.218 nvme0n3: ios=1593/1740, merge=0/0, ticks=554/298, in_queue=852, util=93.51% 00:18:07.218 nvme0n4: ios=1263/1536, merge=0/0, ticks=1239/305, in_queue=1544, util=94.07% 00:18:07.218 00:51:59 -- target/fio.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:07.218 [global] 00:18:07.218 thread=1 00:18:07.218 invalidate=1 00:18:07.218 rw=randwrite 00:18:07.218 time_based=1 00:18:07.218 runtime=1 00:18:07.218 ioengine=libaio 00:18:07.218 direct=1 00:18:07.218 bs=4096 00:18:07.218 iodepth=1 00:18:07.218 norandommap=0 00:18:07.218 numjobs=1 00:18:07.218 00:18:07.218 verify_dump=1 00:18:07.218 verify_backlog=512 00:18:07.218 verify_state_save=0 00:18:07.218 do_verify=1 00:18:07.218 verify=crc32c-intel 00:18:07.218 [job0] 00:18:07.218 filename=/dev/nvme0n1 00:18:07.218 [job1] 00:18:07.218 filename=/dev/nvme0n2 00:18:07.218 [job2] 00:18:07.218 filename=/dev/nvme0n3 00:18:07.218 [job3] 00:18:07.218 filename=/dev/nvme0n4 00:18:07.218 Could not set queue depth (nvme0n1) 00:18:07.218 Could not set queue depth (nvme0n2) 00:18:07.218 Could not set queue depth (nvme0n3) 00:18:07.218 Could not set queue depth (nvme0n4) 00:18:07.481 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.481 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.481 job2: (g=0): rw=randwrite, 
bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.481 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:07.481 fio-3.35 00:18:07.482 Starting 4 threads 00:18:08.874 00:18:08.874 job0: (groupid=0, jobs=1): err= 0: pid=2767609: Sat Apr 27 00:52:01 2024 00:18:08.874 read: IOPS=1862, BW=7449KiB/s (7627kB/s)(7456KiB/1001msec) 00:18:08.874 slat (nsec): min=5002, max=62101, avg=12058.66, stdev=10595.01 00:18:08.874 clat (usec): min=183, max=684, avg=291.57, stdev=80.74 00:18:08.874 lat (usec): min=190, max=715, avg=303.62, stdev=86.54 00:18:08.874 clat percentiles (usec): 00:18:08.874 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 235], 00:18:08.874 | 30.00th=[ 245], 40.00th=[ 253], 50.00th=[ 262], 60.00th=[ 277], 00:18:08.874 | 70.00th=[ 306], 80.00th=[ 343], 90.00th=[ 408], 95.00th=[ 465], 00:18:08.874 | 99.00th=[ 594], 99.50th=[ 644], 99.90th=[ 685], 99.95th=[ 685], 00:18:08.874 | 99.99th=[ 685] 00:18:08.874 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:18:08.874 slat (nsec): min=6113, max=67975, avg=11006.25, stdev=8760.47 00:18:08.874 clat (usec): min=110, max=636, avg=195.86, stdev=76.12 00:18:08.874 lat (usec): min=118, max=704, avg=206.87, stdev=80.87 00:18:08.874 clat percentiles (usec): 00:18:08.874 | 1.00th=[ 121], 5.00th=[ 129], 10.00th=[ 135], 20.00th=[ 149], 00:18:08.874 | 30.00th=[ 159], 40.00th=[ 163], 50.00th=[ 169], 60.00th=[ 176], 00:18:08.874 | 70.00th=[ 188], 80.00th=[ 241], 90.00th=[ 293], 95.00th=[ 375], 00:18:08.874 | 99.00th=[ 490], 99.50th=[ 510], 99.90th=[ 594], 99.95th=[ 627], 00:18:08.874 | 99.99th=[ 635] 00:18:08.874 bw ( KiB/s): min= 8192, max= 8192, per=39.17%, avg=8192.00, stdev= 0.00, samples=1 00:18:08.874 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:08.874 lat (usec) : 250=61.20%, 500=37.12%, 750=1.69% 00:18:08.874 cpu : usr=2.40%, sys=6.50%, ctx=3913, majf=0, minf=1 00:18:08.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:08.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.874 issued rwts: total=1864,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:08.874 job1: (groupid=0, jobs=1): err= 0: pid=2767610: Sat Apr 27 00:52:01 2024 00:18:08.874 read: IOPS=66, BW=264KiB/s (270kB/s)(272KiB/1030msec) 00:18:08.874 slat (nsec): min=6471, max=44875, avg=20004.00, stdev=16158.06 00:18:08.874 clat (usec): min=381, max=42520, avg=13352.08, stdev=19259.35 00:18:08.874 lat (usec): min=389, max=42527, avg=13372.09, stdev=19266.92 00:18:08.874 clat percentiles (usec): 00:18:08.874 | 1.00th=[ 383], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 523], 00:18:08.874 | 30.00th=[ 553], 40.00th=[ 578], 50.00th=[ 594], 60.00th=[ 693], 00:18:08.874 | 70.00th=[40633], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:18:08.874 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:08.874 | 99.99th=[42730] 00:18:08.874 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:18:08.874 slat (nsec): min=6700, max=64515, avg=9163.50, stdev=4049.59 00:18:08.874 clat (usec): min=143, max=699, avg=223.97, stdev=41.01 00:18:08.874 lat (usec): min=151, max=713, avg=233.13, stdev=43.46 00:18:08.874 clat percentiles (usec): 00:18:08.874 | 1.00th=[ 167], 5.00th=[ 184], 
10.00th=[ 194], 20.00th=[ 202], 00:18:08.874 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:18:08.874 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 251], 95.00th=[ 265], 00:18:08.874 | 99.00th=[ 396], 99.50th=[ 529], 99.90th=[ 701], 99.95th=[ 701], 00:18:08.874 | 99.99th=[ 701] 00:18:08.874 bw ( KiB/s): min= 4096, max= 4096, per=19.58%, avg=4096.00, stdev= 0.00, samples=1 00:18:08.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:08.874 lat (usec) : 250=78.28%, 500=11.03%, 750=6.55%, 1000=0.52% 00:18:08.874 lat (msec) : 50=3.62% 00:18:08.874 cpu : usr=0.10%, sys=0.97%, ctx=580, majf=0, minf=1 00:18:08.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:08.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.874 issued rwts: total=68,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:08.874 job2: (groupid=0, jobs=1): err= 0: pid=2767616: Sat Apr 27 00:52:01 2024 00:18:08.874 read: IOPS=2048, BW=8192KiB/s (8389kB/s)(8192KiB/1000msec) 00:18:08.874 slat (nsec): min=2918, max=54951, avg=5462.35, stdev=2035.09 00:18:08.874 clat (usec): min=194, max=752, avg=277.93, stdev=47.10 00:18:08.874 lat (usec): min=198, max=756, avg=283.39, stdev=47.47 00:18:08.874 clat percentiles (usec): 00:18:08.874 | 1.00th=[ 217], 5.00th=[ 231], 10.00th=[ 241], 20.00th=[ 251], 00:18:08.874 | 30.00th=[ 258], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 277], 00:18:08.874 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 310], 95.00th=[ 330], 00:18:08.874 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[ 603], 99.95th=[ 619], 00:18:08.874 | 99.99th=[ 750] 00:18:08.874 write: IOPS=2366, BW=9464KiB/s (9691kB/s)(9464KiB/1000msec); 0 zone resets 00:18:08.874 slat (nsec): min=3961, max=58239, avg=6330.82, stdev=2032.30 00:18:08.874 clat (usec): min=110, max=483, avg=167.53, stdev=23.15 00:18:08.874 lat (usec): min=115, max=506, avg=173.86, stdev=23.91 00:18:08.874 clat percentiles (usec): 00:18:08.874 | 1.00th=[ 124], 5.00th=[ 133], 10.00th=[ 139], 20.00th=[ 149], 00:18:08.874 | 30.00th=[ 159], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:18:08.874 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 200], 00:18:08.874 | 99.00th=[ 227], 99.50th=[ 241], 99.90th=[ 375], 99.95th=[ 429], 00:18:08.874 | 99.99th=[ 486] 00:18:08.874 bw ( KiB/s): min=10456, max=10456, per=49.99%, avg=10456.00, stdev= 0.00, samples=1 00:18:08.874 iops : min= 2614, max= 2614, avg=2614.00, stdev= 0.00, samples=1 00:18:08.874 lat (usec) : 250=62.64%, 500=37.09%, 750=0.25%, 1000=0.02% 00:18:08.874 cpu : usr=1.20%, sys=3.20%, ctx=4419, majf=0, minf=1 00:18:08.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:08.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.874 issued rwts: total=2048,2366,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:08.874 job3: (groupid=0, jobs=1): err= 0: pid=2767618: Sat Apr 27 00:52:01 2024 00:18:08.874 read: IOPS=21, BW=84.6KiB/s (86.6kB/s)(88.0KiB/1040msec) 00:18:08.874 slat (nsec): min=8446, max=34715, avg=32028.82, stdev=5291.19 00:18:08.874 clat (usec): min=40843, max=42073, avg=41853.53, stdev=326.98 00:18:08.874 lat (usec): min=40876, max=42107, 
avg=41885.56, stdev=328.04 00:18:08.874 clat percentiles (usec): 00:18:08.874 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:18:08.874 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:08.874 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:08.874 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:08.874 | 99.99th=[42206] 00:18:08.874 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:18:08.874 slat (nsec): min=5965, max=55851, avg=7988.26, stdev=2580.14 00:18:08.874 clat (usec): min=148, max=494, avg=220.79, stdev=29.61 00:18:08.874 lat (usec): min=156, max=550, avg=228.78, stdev=30.53 00:18:08.874 clat percentiles (usec): 00:18:08.874 | 1.00th=[ 161], 5.00th=[ 180], 10.00th=[ 190], 20.00th=[ 200], 00:18:08.874 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 225], 00:18:08.874 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 249], 95.00th=[ 265], 00:18:08.874 | 99.00th=[ 310], 99.50th=[ 338], 99.90th=[ 494], 99.95th=[ 494], 00:18:08.874 | 99.99th=[ 494] 00:18:08.874 bw ( KiB/s): min= 4096, max= 4096, per=19.58%, avg=4096.00, stdev= 0.00, samples=1 00:18:08.874 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:08.874 lat (usec) : 250=86.33%, 500=9.55% 00:18:08.874 lat (msec) : 50=4.12% 00:18:08.874 cpu : usr=0.10%, sys=0.77%, ctx=535, majf=0, minf=1 00:18:08.874 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:08.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:08.874 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:08.874 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:08.874 00:18:08.874 Run status group 0 (all jobs): 00:18:08.874 READ: bw=15.0MiB/s (15.8MB/s), 84.6KiB/s-8192KiB/s (86.6kB/s-8389kB/s), io=15.6MiB (16.4MB), run=1000-1040msec 00:18:08.874 WRITE: bw=20.4MiB/s (21.4MB/s), 1969KiB/s-9464KiB/s (2016kB/s-9691kB/s), io=21.2MiB (22.3MB), run=1000-1040msec 00:18:08.874 00:18:08.874 Disk stats (read/write): 00:18:08.874 nvme0n1: ios=1586/1619, merge=0/0, ticks=635/307, in_queue=942, util=88.18% 00:18:08.874 nvme0n2: ios=98/512, merge=0/0, ticks=970/108, in_queue=1078, util=94.07% 00:18:08.874 nvme0n3: ios=1692/2048, merge=0/0, ticks=1268/332, in_queue=1600, util=98.09% 00:18:08.874 nvme0n4: ios=38/512, merge=0/0, ticks=1594/109, in_queue=1703, util=95.91% 00:18:08.875 00:52:01 -- target/fio.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:08.875 [global] 00:18:08.875 thread=1 00:18:08.875 invalidate=1 00:18:08.875 rw=write 00:18:08.875 time_based=1 00:18:08.875 runtime=1 00:18:08.875 ioengine=libaio 00:18:08.875 direct=1 00:18:08.875 bs=4096 00:18:08.875 iodepth=128 00:18:08.875 norandommap=0 00:18:08.875 numjobs=1 00:18:08.875 00:18:08.875 verify_dump=1 00:18:08.875 verify_backlog=512 00:18:08.875 verify_state_save=0 00:18:08.875 do_verify=1 00:18:08.875 verify=crc32c-intel 00:18:08.875 [job0] 00:18:08.875 filename=/dev/nvme0n1 00:18:08.875 [job1] 00:18:08.875 filename=/dev/nvme0n2 00:18:08.875 [job2] 00:18:08.875 filename=/dev/nvme0n3 00:18:08.875 [job3] 00:18:08.875 filename=/dev/nvme0n4 00:18:08.875 Could not set queue depth (nvme0n1) 00:18:08.875 Could not set queue depth (nvme0n2) 00:18:08.875 Could not set queue depth (nvme0n3) 00:18:08.875 Could not set queue depth 
(nvme0n4) 00:18:09.139 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.139 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.139 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.139 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:09.139 fio-3.35 00:18:09.139 Starting 4 threads 00:18:10.527 00:18:10.527 job0: (groupid=0, jobs=1): err= 0: pid=2768088: Sat Apr 27 00:52:02 2024 00:18:10.527 read: IOPS=5073, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1006msec) 00:18:10.527 slat (nsec): min=876, max=18312k, avg=99277.98, stdev=766451.87 00:18:10.527 clat (usec): min=3792, max=36804, avg=12470.60, stdev=4945.55 00:18:10.527 lat (usec): min=3798, max=44360, avg=12569.87, stdev=4992.73 00:18:10.527 clat percentiles (usec): 00:18:10.527 | 1.00th=[ 4817], 5.00th=[ 7701], 10.00th=[ 8979], 20.00th=[ 9896], 00:18:10.527 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10683], 60.00th=[11469], 00:18:10.527 | 70.00th=[12256], 80.00th=[14484], 90.00th=[18220], 95.00th=[23200], 00:18:10.527 | 99.00th=[34341], 99.50th=[34341], 99.90th=[34866], 99.95th=[34866], 00:18:10.527 | 99.99th=[36963] 00:18:10.527 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:18:10.527 slat (nsec): min=1725, max=25793k, avg=87783.40, stdev=609087.03 00:18:10.527 clat (usec): min=2449, max=59372, avg=12485.42, stdev=8225.74 00:18:10.527 lat (usec): min=2456, max=59375, avg=12573.20, stdev=8287.29 00:18:10.527 clat percentiles (usec): 00:18:10.527 | 1.00th=[ 3294], 5.00th=[ 5276], 10.00th=[ 7439], 20.00th=[ 9634], 00:18:10.527 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:18:10.527 | 70.00th=[11207], 80.00th=[11469], 90.00th=[20317], 95.00th=[27395], 00:18:10.527 | 99.00th=[58459], 99.50th=[58983], 99.90th=[59507], 99.95th=[59507], 00:18:10.527 | 99.99th=[59507] 00:18:10.527 bw ( KiB/s): min=17008, max=23952, per=30.29%, avg=20480.00, stdev=4910.15, samples=2 00:18:10.527 iops : min= 4252, max= 5988, avg=5120.00, stdev=1227.54, samples=2 00:18:10.527 lat (msec) : 4=1.18%, 10=21.70%, 20=68.35%, 50=7.84%, 100=0.92% 00:18:10.527 cpu : usr=1.99%, sys=6.67%, ctx=557, majf=0, minf=1 00:18:10.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:10.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.527 issued rwts: total=5104,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.527 job1: (groupid=0, jobs=1): err= 0: pid=2768089: Sat Apr 27 00:52:02 2024 00:18:10.527 read: IOPS=2643, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1010msec) 00:18:10.527 slat (nsec): min=1025, max=13088k, avg=138980.11, stdev=903194.83 00:18:10.527 clat (usec): min=6223, max=53754, avg=15570.36, stdev=6251.06 00:18:10.527 lat (usec): min=6228, max=53761, avg=15709.34, stdev=6328.53 00:18:10.527 clat percentiles (usec): 00:18:10.527 | 1.00th=[ 7046], 5.00th=[10814], 10.00th=[11338], 20.00th=[11863], 00:18:10.527 | 30.00th=[12125], 40.00th=[12518], 50.00th=[14091], 60.00th=[14353], 00:18:10.527 | 70.00th=[14746], 80.00th=[17433], 90.00th=[22676], 95.00th=[28967], 00:18:10.527 | 99.00th=[44303], 99.50th=[45351], 99.90th=[53740], 99.95th=[53740], 00:18:10.527 | 
99.99th=[53740] 00:18:10.527 write: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec); 0 zone resets 00:18:10.527 slat (nsec): min=1885, max=15872k, avg=194982.14, stdev=1021503.82 00:18:10.527 clat (msec): min=2, max=129, avg=27.64, stdev=22.04 00:18:10.527 lat (msec): min=2, max=129, avg=27.83, stdev=22.19 00:18:10.527 clat percentiles (msec): 00:18:10.527 | 1.00th=[ 4], 5.00th=[ 8], 10.00th=[ 10], 20.00th=[ 14], 00:18:10.527 | 30.00th=[ 17], 40.00th=[ 20], 50.00th=[ 23], 60.00th=[ 27], 00:18:10.527 | 70.00th=[ 31], 80.00th=[ 34], 90.00th=[ 43], 95.00th=[ 85], 00:18:10.527 | 99.00th=[ 118], 99.50th=[ 121], 99.90th=[ 130], 99.95th=[ 130], 00:18:10.527 | 99.99th=[ 130] 00:18:10.527 bw ( KiB/s): min=12144, max=12288, per=18.07%, avg=12216.00, stdev=101.82, samples=2 00:18:10.527 iops : min= 3036, max= 3072, avg=3054.00, stdev=25.46, samples=2 00:18:10.527 lat (msec) : 4=0.68%, 10=7.92%, 20=53.13%, 50=34.00%, 100=2.47% 00:18:10.527 lat (msec) : 250=1.79% 00:18:10.527 cpu : usr=1.78%, sys=3.57%, ctx=337, majf=0, minf=1 00:18:10.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:18:10.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.527 issued rwts: total=2670,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.527 job2: (groupid=0, jobs=1): err= 0: pid=2768090: Sat Apr 27 00:52:02 2024 00:18:10.527 read: IOPS=3540, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:18:10.527 slat (nsec): min=970, max=13670k, avg=128683.80, stdev=781814.35 00:18:10.527 clat (usec): min=5799, max=44292, avg=15853.03, stdev=4893.62 00:18:10.527 lat (usec): min=8503, max=44298, avg=15981.71, stdev=4960.43 00:18:10.527 clat percentiles (usec): 00:18:10.527 | 1.00th=[ 9372], 5.00th=[11076], 10.00th=[11600], 20.00th=[11994], 00:18:10.527 | 30.00th=[12256], 40.00th=[13173], 50.00th=[14484], 60.00th=[17171], 00:18:10.527 | 70.00th=[17957], 80.00th=[18220], 90.00th=[20579], 95.00th=[24773], 00:18:10.527 | 99.00th=[40109], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:10.527 | 99.99th=[44303] 00:18:10.527 write: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec); 0 zone resets 00:18:10.527 slat (nsec): min=1805, max=23052k, avg=148193.13, stdev=979143.61 00:18:10.527 clat (usec): min=8285, max=70700, avg=19487.29, stdev=12521.46 00:18:10.527 lat (usec): min=8289, max=70734, avg=19635.49, stdev=12616.61 00:18:10.527 clat percentiles (usec): 00:18:10.527 | 1.00th=[10552], 5.00th=[11600], 10.00th=[11731], 20.00th=[11863], 00:18:10.527 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13173], 60.00th=[15270], 00:18:10.527 | 70.00th=[18220], 80.00th=[23462], 90.00th=[41681], 95.00th=[51119], 00:18:10.527 | 99.00th=[59507], 99.50th=[59507], 99.90th=[60031], 99.95th=[66847], 00:18:10.527 | 99.99th=[70779] 00:18:10.527 bw ( KiB/s): min=11288, max=17384, per=21.21%, avg=14336.00, stdev=4310.52, samples=2 00:18:10.527 iops : min= 2822, max= 4346, avg=3584.00, stdev=1077.63, samples=2 00:18:10.527 lat (msec) : 10=1.24%, 20=79.02%, 50=16.84%, 100=2.89% 00:18:10.527 cpu : usr=1.88%, sys=2.38%, ctx=343, majf=0, minf=1 00:18:10.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:18:10.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.527 issued rwts: 
total=3576,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.527 job3: (groupid=0, jobs=1): err= 0: pid=2768091: Sat Apr 27 00:52:02 2024 00:18:10.527 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:18:10.527 slat (nsec): min=959, max=10991k, avg=97914.47, stdev=564892.61 00:18:10.527 clat (usec): min=4635, max=23669, avg=12297.11, stdev=1848.00 00:18:10.527 lat (usec): min=4641, max=23718, avg=12395.03, stdev=1897.55 00:18:10.527 clat percentiles (usec): 00:18:10.527 | 1.00th=[ 7832], 5.00th=[ 9503], 10.00th=[10683], 20.00th=[11338], 00:18:10.527 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:18:10.527 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14353], 95.00th=[15401], 00:18:10.527 | 99.00th=[19268], 99.50th=[20579], 99.90th=[21890], 99.95th=[22676], 00:18:10.527 | 99.99th=[23725] 00:18:10.527 write: IOPS=5267, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1005msec); 0 zone resets 00:18:10.527 slat (nsec): min=1498, max=6177.5k, avg=90462.20, stdev=453079.45 00:18:10.527 clat (usec): min=730, max=23113, avg=12191.01, stdev=2493.74 00:18:10.527 lat (usec): min=1219, max=23119, avg=12281.48, stdev=2514.93 00:18:10.527 clat percentiles (usec): 00:18:10.527 | 1.00th=[ 3982], 5.00th=[ 7635], 10.00th=[10290], 20.00th=[11469], 00:18:10.527 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12125], 60.00th=[12387], 00:18:10.527 | 70.00th=[12518], 80.00th=[12780], 90.00th=[13960], 95.00th=[17171], 00:18:10.527 | 99.00th=[19792], 99.50th=[21627], 99.90th=[23200], 99.95th=[23200], 00:18:10.527 | 99.99th=[23200] 00:18:10.527 bw ( KiB/s): min=20480, max=20848, per=30.57%, avg=20664.00, stdev=260.22, samples=2 00:18:10.527 iops : min= 5120, max= 5212, avg=5166.00, stdev=65.05, samples=2 00:18:10.527 lat (usec) : 750=0.01% 00:18:10.527 lat (msec) : 2=0.26%, 4=0.29%, 10=7.17%, 20=91.45%, 50=0.82% 00:18:10.527 cpu : usr=3.78%, sys=5.08%, ctx=602, majf=0, minf=1 00:18:10.527 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:10.527 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:10.527 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:10.527 issued rwts: total=5120,5294,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:10.527 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:10.527 00:18:10.527 Run status group 0 (all jobs): 00:18:10.527 READ: bw=63.7MiB/s (66.8MB/s), 10.3MiB/s-19.9MiB/s (10.8MB/s-20.9MB/s), io=64.3MiB (67.5MB), run=1005-1010msec 00:18:10.527 WRITE: bw=66.0MiB/s (69.2MB/s), 11.9MiB/s-20.6MiB/s (12.5MB/s-21.6MB/s), io=66.7MiB (69.9MB), run=1005-1010msec 00:18:10.527 00:18:10.527 Disk stats (read/write): 00:18:10.528 nvme0n1: ios=4146/4407, merge=0/0, ticks=50537/55143, in_queue=105680, util=92.79% 00:18:10.528 nvme0n2: ios=2193/2560, merge=0/0, ticks=34361/71088, in_queue=105449, util=96.96% 00:18:10.528 nvme0n3: ios=3101/3319, merge=0/0, ticks=16698/19078, in_queue=35776, util=97.21% 00:18:10.528 nvme0n4: ios=4319/4608, merge=0/0, ticks=29433/31210, in_queue=60643, util=96.26% 00:18:10.528 00:52:02 -- target/fio.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:18:10.528 [global] 00:18:10.528 thread=1 00:18:10.528 invalidate=1 00:18:10.528 rw=randwrite 00:18:10.528 time_based=1 00:18:10.528 runtime=1 00:18:10.528 ioengine=libaio 00:18:10.528 direct=1 00:18:10.528 bs=4096 00:18:10.528 iodepth=128 00:18:10.528 norandommap=0 00:18:10.528 
numjobs=1 00:18:10.528 00:18:10.528 verify_dump=1 00:18:10.528 verify_backlog=512 00:18:10.528 verify_state_save=0 00:18:10.528 do_verify=1 00:18:10.528 verify=crc32c-intel 00:18:10.528 [job0] 00:18:10.528 filename=/dev/nvme0n1 00:18:10.528 [job1] 00:18:10.528 filename=/dev/nvme0n2 00:18:10.528 [job2] 00:18:10.528 filename=/dev/nvme0n3 00:18:10.528 [job3] 00:18:10.528 filename=/dev/nvme0n4 00:18:10.528 Could not set queue depth (nvme0n1) 00:18:10.528 Could not set queue depth (nvme0n2) 00:18:10.528 Could not set queue depth (nvme0n3) 00:18:10.528 Could not set queue depth (nvme0n4) 00:18:10.785 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:10.785 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:10.785 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:10.785 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:10.785 fio-3.35 00:18:10.785 Starting 4 threads 00:18:12.172 00:18:12.172 job0: (groupid=0, jobs=1): err= 0: pid=2768563: Sat Apr 27 00:52:04 2024 00:18:12.172 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:18:12.172 slat (nsec): min=790, max=11180k, avg=100569.37, stdev=518292.44 00:18:12.172 clat (usec): min=8431, max=29763, avg=12997.03, stdev=2688.67 00:18:12.172 lat (usec): min=8554, max=29767, avg=13097.60, stdev=2668.41 00:18:12.172 clat percentiles (usec): 00:18:12.172 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11207], 20.00th=[11731], 00:18:12.172 | 30.00th=[12125], 40.00th=[12387], 50.00th=[12518], 60.00th=[12780], 00:18:12.172 | 70.00th=[13173], 80.00th=[13566], 90.00th=[14091], 95.00th=[15533], 00:18:12.172 | 99.00th=[28181], 99.50th=[28967], 99.90th=[29754], 99.95th=[29754], 00:18:12.172 | 99.99th=[29754] 00:18:12.172 write: IOPS=5009, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1003msec); 0 zone resets 00:18:12.172 slat (nsec): min=1439, max=12486k, avg=103678.99, stdev=559770.74 00:18:12.172 clat (usec): min=637, max=38345, avg=13283.96, stdev=4458.61 00:18:12.172 lat (usec): min=3579, max=38352, avg=13387.64, stdev=4460.80 00:18:12.172 clat percentiles (usec): 00:18:12.172 | 1.00th=[ 6587], 5.00th=[10159], 10.00th=[11207], 20.00th=[11731], 00:18:12.172 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:18:12.172 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13960], 95.00th=[22152], 00:18:12.172 | 99.00th=[32637], 99.50th=[38536], 99.90th=[38536], 99.95th=[38536], 00:18:12.172 | 99.99th=[38536] 00:18:12.172 bw ( KiB/s): min=18984, max=20192, per=25.06%, avg=19588.00, stdev=854.18, samples=2 00:18:12.172 iops : min= 4746, max= 5048, avg=4897.00, stdev=213.55, samples=2 00:18:12.172 lat (usec) : 750=0.01% 00:18:12.172 lat (msec) : 4=0.33%, 10=3.53%, 20=90.89%, 50=5.24% 00:18:12.172 cpu : usr=2.30%, sys=3.99%, ctx=497, majf=0, minf=1 00:18:12.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:12.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.172 issued rwts: total=4608,5025,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.172 job1: (groupid=0, jobs=1): err= 0: pid=2768564: Sat Apr 27 00:52:04 2024 00:18:12.172 read: IOPS=5069, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1010msec) 
00:18:12.172 slat (nsec): min=848, max=22613k, avg=100109.52, stdev=811906.69 00:18:12.172 clat (usec): min=4022, max=32045, avg=13037.19, stdev=4130.85 00:18:12.172 lat (usec): min=4027, max=32051, avg=13137.30, stdev=4175.43 00:18:12.172 clat percentiles (usec): 00:18:12.172 | 1.00th=[ 4621], 5.00th=[ 7635], 10.00th=[ 9372], 20.00th=[10945], 00:18:12.172 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:18:12.172 | 70.00th=[12911], 80.00th=[15008], 90.00th=[19530], 95.00th=[21365], 00:18:12.172 | 99.00th=[27395], 99.50th=[27395], 99.90th=[27395], 99.95th=[27395], 00:18:12.172 | 99.99th=[32113] 00:18:12.172 write: IOPS=5225, BW=20.4MiB/s (21.4MB/s)(20.6MiB/1010msec); 0 zone resets 00:18:12.172 slat (nsec): min=1615, max=18021k, avg=67293.71, stdev=377120.65 00:18:12.172 clat (usec): min=642, max=34634, avg=11611.18, stdev=4192.16 00:18:12.172 lat (usec): min=648, max=34663, avg=11678.48, stdev=4220.35 00:18:12.172 clat percentiles (usec): 00:18:12.172 | 1.00th=[ 2507], 5.00th=[ 4359], 10.00th=[ 5735], 20.00th=[ 9241], 00:18:12.172 | 30.00th=[10814], 40.00th=[11600], 50.00th=[11994], 60.00th=[12256], 00:18:12.172 | 70.00th=[12518], 80.00th=[12780], 90.00th=[15926], 95.00th=[20841], 00:18:12.172 | 99.00th=[26346], 99.50th=[26346], 99.90th=[26346], 99.95th=[27395], 00:18:12.172 | 99.99th=[34866] 00:18:12.172 bw ( KiB/s): min=20480, max=20728, per=26.36%, avg=20604.00, stdev=175.36, samples=2 00:18:12.172 iops : min= 5120, max= 5182, avg=5151.00, stdev=43.84, samples=2 00:18:12.172 lat (usec) : 750=0.08% 00:18:12.172 lat (msec) : 2=0.25%, 4=1.91%, 10=17.51%, 20=73.44%, 50=6.81% 00:18:12.172 cpu : usr=1.78%, sys=5.05%, ctx=670, majf=0, minf=1 00:18:12.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:18:12.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.172 issued rwts: total=5120,5278,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.172 job2: (groupid=0, jobs=1): err= 0: pid=2768565: Sat Apr 27 00:52:04 2024 00:18:12.172 read: IOPS=4051, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1011msec) 00:18:12.172 slat (nsec): min=942, max=25429k, avg=135625.12, stdev=1055075.70 00:18:12.172 clat (usec): min=2758, max=63776, avg=16251.19, stdev=8129.54 00:18:12.172 lat (usec): min=4236, max=63785, avg=16386.82, stdev=8199.98 00:18:12.172 clat percentiles (usec): 00:18:12.172 | 1.00th=[ 6325], 5.00th=[11338], 10.00th=[12518], 20.00th=[13042], 00:18:12.172 | 30.00th=[13173], 40.00th=[13304], 50.00th=[13435], 60.00th=[13829], 00:18:12.172 | 70.00th=[15008], 80.00th=[18482], 90.00th=[23200], 95.00th=[26870], 00:18:12.172 | 99.00th=[60031], 99.50th=[61080], 99.90th=[63701], 99.95th=[63701], 00:18:12.172 | 99.99th=[63701] 00:18:12.172 write: IOPS=4476, BW=17.5MiB/s (18.3MB/s)(17.7MiB/1011msec); 0 zone resets 00:18:12.172 slat (nsec): min=1550, max=16649k, avg=94205.43, stdev=525730.67 00:18:12.172 clat (usec): min=945, max=63745, avg=13622.17, stdev=4675.95 00:18:12.172 lat (usec): min=952, max=63749, avg=13716.37, stdev=4710.66 00:18:12.172 clat percentiles (usec): 00:18:12.172 | 1.00th=[ 3490], 5.00th=[ 6194], 10.00th=[ 8586], 20.00th=[11207], 00:18:12.172 | 30.00th=[12518], 40.00th=[13042], 50.00th=[13435], 60.00th=[13566], 00:18:12.172 | 70.00th=[13829], 80.00th=[14353], 90.00th=[20579], 95.00th=[22152], 00:18:12.172 | 99.00th=[26346], 99.50th=[31065], 99.90th=[39584], 
99.95th=[39584], 00:18:12.172 | 99.99th=[63701] 00:18:12.172 bw ( KiB/s): min=16384, max=18808, per=22.51%, avg=17596.00, stdev=1714.03, samples=2 00:18:12.172 iops : min= 4096, max= 4702, avg=4399.00, stdev=428.51, samples=2 00:18:12.172 lat (usec) : 1000=0.03% 00:18:12.172 lat (msec) : 4=0.79%, 10=7.77%, 20=78.37%, 50=11.84%, 100=1.19% 00:18:12.172 cpu : usr=2.77%, sys=3.76%, ctx=513, majf=0, minf=1 00:18:12.172 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:18:12.172 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.172 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.172 issued rwts: total=4096,4526,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.172 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.172 job3: (groupid=0, jobs=1): err= 0: pid=2768566: Sat Apr 27 00:52:04 2024 00:18:12.172 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:18:12.172 slat (nsec): min=959, max=11307k, avg=99204.29, stdev=586862.15 00:18:12.172 clat (usec): min=3185, max=31131, avg=13671.30, stdev=3146.12 00:18:12.172 lat (usec): min=3188, max=34845, avg=13770.51, stdev=3174.48 00:18:12.172 clat percentiles (usec): 00:18:12.172 | 1.00th=[ 6390], 5.00th=[ 8848], 10.00th=[10421], 20.00th=[11731], 00:18:12.172 | 30.00th=[12518], 40.00th=[13173], 50.00th=[13566], 60.00th=[14091], 00:18:12.172 | 70.00th=[14615], 80.00th=[15008], 90.00th=[16581], 95.00th=[18220], 00:18:12.172 | 99.00th=[24249], 99.50th=[27919], 99.90th=[28443], 99.95th=[28443], 00:18:12.172 | 99.99th=[31065] 00:18:12.172 write: IOPS=4910, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1004msec); 0 zone resets 00:18:12.172 slat (nsec): min=1587, max=19146k, avg=94122.79, stdev=646071.29 00:18:12.172 clat (usec): min=492, max=30352, avg=13056.83, stdev=3389.19 00:18:12.172 lat (usec): min=790, max=35191, avg=13150.95, stdev=3426.27 00:18:12.172 clat percentiles (usec): 00:18:12.172 | 1.00th=[ 3884], 5.00th=[ 7373], 10.00th=[ 9896], 20.00th=[11600], 00:18:12.172 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13173], 60.00th=[13435], 00:18:12.172 | 70.00th=[13566], 80.00th=[13960], 90.00th=[15926], 95.00th=[20841], 00:18:12.172 | 99.00th=[24511], 99.50th=[26608], 99.90th=[27395], 99.95th=[27395], 00:18:12.172 | 99.99th=[30278] 00:18:12.172 bw ( KiB/s): min=17936, max=20480, per=24.57%, avg=19208.00, stdev=1798.88, samples=2 00:18:12.172 iops : min= 4484, max= 5120, avg=4802.00, stdev=449.72, samples=2 00:18:12.172 lat (usec) : 500=0.01%, 1000=0.03% 00:18:12.172 lat (msec) : 2=0.13%, 4=0.67%, 10=9.82%, 20=84.66%, 50=4.68% 00:18:12.172 cpu : usr=1.89%, sys=3.39%, ctx=463, majf=0, minf=1 00:18:12.173 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:18:12.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:12.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:12.173 issued rwts: total=4608,4930,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:12.173 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:12.173 00:18:12.173 Run status group 0 (all jobs): 00:18:12.173 READ: bw=71.2MiB/s (74.7MB/s), 15.8MiB/s-19.8MiB/s (16.6MB/s-20.8MB/s), io=72.0MiB (75.5MB), run=1003-1011msec 00:18:12.173 WRITE: bw=76.3MiB/s (80.1MB/s), 17.5MiB/s-20.4MiB/s (18.3MB/s-21.4MB/s), io=77.2MiB (80.9MB), run=1003-1011msec 00:18:12.173 00:18:12.173 Disk stats (read/write): 00:18:12.173 nvme0n1: ios=3953/4096, merge=0/0, ticks=12629/13547, in_queue=26176, util=85.87% 00:18:12.173 nvme0n2: 
ios=4118/4503, merge=0/0, ticks=53084/51480, in_queue=104564, util=96.13% 00:18:12.173 nvme0n3: ios=3379/3584, merge=0/0, ticks=54289/48678, in_queue=102967, util=89.79% 00:18:12.173 nvme0n4: ios=3930/4096, merge=0/0, ticks=28016/29986, in_queue=58002, util=97.02% 00:18:12.173 00:52:04 -- target/fio.sh@55 -- # sync 00:18:12.173 00:52:04 -- target/fio.sh@59 -- # fio_pid=2768858 00:18:12.173 00:52:04 -- target/fio.sh@61 -- # sleep 3 00:18:12.173 00:52:04 -- target/fio.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:18:12.173 [global] 00:18:12.173 thread=1 00:18:12.173 invalidate=1 00:18:12.173 rw=read 00:18:12.173 time_based=1 00:18:12.173 runtime=10 00:18:12.173 ioengine=libaio 00:18:12.173 direct=1 00:18:12.173 bs=4096 00:18:12.173 iodepth=1 00:18:12.173 norandommap=1 00:18:12.173 numjobs=1 00:18:12.173 00:18:12.173 [job0] 00:18:12.173 filename=/dev/nvme0n1 00:18:12.173 [job1] 00:18:12.173 filename=/dev/nvme0n2 00:18:12.173 [job2] 00:18:12.173 filename=/dev/nvme0n3 00:18:12.173 [job3] 00:18:12.173 filename=/dev/nvme0n4 00:18:12.173 Could not set queue depth (nvme0n1) 00:18:12.173 Could not set queue depth (nvme0n2) 00:18:12.173 Could not set queue depth (nvme0n3) 00:18:12.173 Could not set queue depth (nvme0n4) 00:18:12.430 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.430 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.430 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.430 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:12.430 fio-3.35 00:18:12.430 Starting 4 threads 00:18:14.963 00:52:07 -- target/fio.sh@63 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:18:15.222 00:52:07 -- target/fio.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:18:15.222 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=520192, buflen=4096 00:18:15.222 fio: pid=2769038, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.222 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=20058112, buflen=4096 00:18:15.222 fio: pid=2769037, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.222 00:52:07 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.222 00:52:07 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:18:15.481 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=31453184, buflen=4096 00:18:15.481 fio: pid=2769035, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.481 00:52:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.482 00:52:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:18:15.740 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=319488, buflen=4096 00:18:15.740 fio: pid=2769036, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:18:15.740 00:52:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.740 00:52:08 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:18:15.740 00:18:15.740 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2769035: Sat Apr 27 00:52:08 2024 00:18:15.740 read: IOPS=2665, BW=10.4MiB/s (10.9MB/s)(30.0MiB/2881msec) 00:18:15.740 slat (usec): min=4, max=10935, avg=21.87, stdev=195.28 00:18:15.740 clat (usec): min=176, max=42338, avg=350.04, stdev=1452.00 00:18:15.740 lat (usec): min=182, max=42348, avg=371.90, stdev=1466.35 00:18:15.740 clat percentiles (usec): 00:18:15.740 | 1.00th=[ 200], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 231], 00:18:15.740 | 30.00th=[ 247], 40.00th=[ 262], 50.00th=[ 277], 60.00th=[ 310], 00:18:15.740 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 383], 95.00th=[ 433], 00:18:15.740 | 99.00th=[ 586], 99.50th=[ 676], 99.90th=[41157], 99.95th=[42206], 00:18:15.740 | 99.99th=[42206] 00:18:15.740 bw ( KiB/s): min= 2808, max=13696, per=64.25%, avg=10678.40, stdev=4510.28, samples=5 00:18:15.740 iops : min= 702, max= 3424, avg=2669.60, stdev=1127.57, samples=5 00:18:15.740 lat (usec) : 250=32.34%, 500=65.42%, 750=1.94%, 1000=0.08% 00:18:15.740 lat (msec) : 2=0.08%, 50=0.13% 00:18:15.740 cpu : usr=2.36%, sys=6.98%, ctx=7686, majf=0, minf=1 00:18:15.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.740 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.740 issued rwts: total=7680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.740 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2769036: Sat Apr 27 00:52:08 2024 00:18:15.740 read: IOPS=25, BW=101KiB/s (104kB/s)(312KiB/3076msec) 00:18:15.740 slat (usec): min=8, max=10779, avg=178.33, stdev=1208.23 00:18:15.740 clat (usec): min=542, max=42364, avg=39231.65, stdev=10162.95 00:18:15.740 lat (usec): min=552, max=53104, avg=39411.74, stdev=10282.54 00:18:15.740 clat percentiles (usec): 00:18:15.740 | 1.00th=[ 545], 5.00th=[ 709], 10.00th=[41157], 20.00th=[41681], 00:18:15.740 | 30.00th=[41681], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:18:15.740 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:15.740 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:15.740 | 99.99th=[42206] 00:18:15.740 bw ( KiB/s): min= 96, max= 128, per=0.61%, avg=102.40, stdev=14.31, samples=5 00:18:15.740 iops : min= 24, max= 32, avg=25.60, stdev= 3.58, samples=5 00:18:15.740 lat (usec) : 750=5.06%, 1000=1.27% 00:18:15.740 lat (msec) : 50=92.41% 00:18:15.740 cpu : usr=0.16%, sys=0.00%, ctx=81, majf=0, minf=1 00:18:15.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.740 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.740 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.740 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.740 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2769037: Sat Apr 27 00:52:08 2024 00:18:15.740 read: IOPS=1793, BW=7172KiB/s (7345kB/s)(19.1MiB/2731msec) 00:18:15.740 slat (usec): min=5, max=8758, avg=20.00, stdev=125.56 00:18:15.740 clat (usec): min=194, max=42215, 
avg=534.31, stdev=2965.47 00:18:15.740 lat (usec): min=200, max=50974, avg=554.30, stdev=2994.81 00:18:15.740 clat percentiles (usec): 00:18:15.740 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 241], 20.00th=[ 253], 00:18:15.740 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 297], 60.00th=[ 338], 00:18:15.740 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 441], 95.00th=[ 482], 00:18:15.740 | 99.00th=[ 553], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:18:15.740 | 99.99th=[42206] 00:18:15.740 bw ( KiB/s): min= 96, max=11600, per=47.08%, avg=7825.60, stdev=5101.39, samples=5 00:18:15.740 iops : min= 24, max= 2900, avg=1956.40, stdev=1275.35, samples=5 00:18:15.740 lat (usec) : 250=17.05%, 500=79.79%, 750=2.59%, 1000=0.04% 00:18:15.740 lat (msec) : 50=0.51% 00:18:15.740 cpu : usr=1.25%, sys=5.35%, ctx=4899, majf=0, minf=1 00:18:15.740 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.740 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.741 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.741 issued rwts: total=4898,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.741 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2769038: Sat Apr 27 00:52:08 2024 00:18:15.741 read: IOPS=48, BW=194KiB/s (199kB/s)(508KiB/2613msec) 00:18:15.741 slat (nsec): min=7025, max=48661, avg=39755.98, stdev=10277.08 00:18:15.741 clat (usec): min=403, max=42595, avg=20517.17, stdev=20619.90 00:18:15.741 lat (usec): min=415, max=42602, avg=20556.90, stdev=20621.28 00:18:15.741 clat percentiles (usec): 00:18:15.741 | 1.00th=[ 474], 5.00th=[ 537], 10.00th=[ 734], 20.00th=[ 799], 00:18:15.741 | 30.00th=[ 816], 40.00th=[ 840], 50.00th=[ 881], 60.00th=[41157], 00:18:15.741 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:15.741 | 99.00th=[42206], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:18:15.741 | 99.99th=[42730] 00:18:15.741 bw ( KiB/s): min= 96, max= 264, per=1.19%, avg=198.40, stdev=77.65, samples=5 00:18:15.741 iops : min= 24, max= 66, avg=49.60, stdev=19.41, samples=5 00:18:15.741 lat (usec) : 500=2.34%, 750=7.81%, 1000=40.62% 00:18:15.741 lat (msec) : 2=0.78%, 50=47.66% 00:18:15.741 cpu : usr=0.42%, sys=0.00%, ctx=128, majf=0, minf=2 00:18:15.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:15.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.741 complete : 0=0.8%, 4=99.2%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:15.741 issued rwts: total=128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:15.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:15.741 00:18:15.741 Run status group 0 (all jobs): 00:18:15.741 READ: bw=16.2MiB/s (17.0MB/s), 101KiB/s-10.4MiB/s (104kB/s-10.9MB/s), io=49.9MiB (52.3MB), run=2613-3076msec 00:18:15.741 00:18:15.741 Disk stats (read/write): 00:18:15.741 nvme0n1: ios=7599/0, merge=0/0, ticks=3143/0, in_queue=3143, util=98.30% 00:18:15.741 nvme0n2: ios=96/0, merge=0/0, ticks=2865/0, in_queue=2865, util=96.04% 00:18:15.741 nvme0n3: ios=4893/0, merge=0/0, ticks=2216/0, in_queue=2216, util=96.07% 00:18:15.741 nvme0n4: ios=126/0, merge=0/0, ticks=2569/0, in_queue=2569, util=96.46% 00:18:15.741 00:52:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.741 00:52:08 -- target/fio.sh@66 -- # 
/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:18:15.999 00:52:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:15.999 00:52:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:18:16.258 00:52:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.258 00:52:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:18:16.258 00:52:08 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:18:16.258 00:52:08 -- target/fio.sh@66 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:18:16.517 00:52:09 -- target/fio.sh@69 -- # fio_status=0 00:18:16.517 00:52:09 -- target/fio.sh@70 -- # wait 2768858 00:18:16.517 00:52:09 -- target/fio.sh@70 -- # fio_status=4 00:18:16.517 00:52:09 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:17.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:17.085 00:52:09 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:17.085 00:52:09 -- common/autotest_common.sh@1205 -- # local i=0 00:18:17.085 00:52:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:18:17.085 00:52:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.086 00:52:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:18:17.086 00:52:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:17.086 00:52:09 -- common/autotest_common.sh@1217 -- # return 0 00:18:17.086 00:52:09 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:18:17.086 00:52:09 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:18:17.086 nvmf hotplug test: fio failed as expected 00:18:17.086 00:52:09 -- target/fio.sh@83 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:17.086 00:52:09 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:18:17.086 00:52:09 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:18:17.086 00:52:09 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:18:17.086 00:52:09 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:18:17.086 00:52:09 -- target/fio.sh@91 -- # nvmftestfini 00:18:17.086 00:52:09 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:17.086 00:52:09 -- nvmf/common.sh@117 -- # sync 00:18:17.086 00:52:09 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:17.086 00:52:09 -- nvmf/common.sh@120 -- # set +e 00:18:17.086 00:52:09 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:17.086 00:52:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:17.086 rmmod nvme_tcp 00:18:17.086 rmmod nvme_fabrics 00:18:17.086 rmmod nvme_keyring 00:18:17.086 00:52:09 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:17.086 00:52:09 -- nvmf/common.sh@124 -- # set -e 00:18:17.086 00:52:09 -- nvmf/common.sh@125 -- # return 0 00:18:17.086 00:52:09 -- nvmf/common.sh@478 -- # '[' -n 2765506 ']' 00:18:17.086 00:52:09 -- nvmf/common.sh@479 -- # killprocess 2765506 00:18:17.086 00:52:09 -- common/autotest_common.sh@936 -- # '[' -z 2765506 ']' 00:18:17.086 00:52:09 -- common/autotest_common.sh@940 -- # kill -0 2765506 00:18:17.086 00:52:09 -- common/autotest_common.sh@941 -- # uname 
00:18:17.086 00:52:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:17.086 00:52:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2765506 00:18:17.086 00:52:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:17.086 00:52:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:17.086 00:52:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2765506' 00:18:17.086 killing process with pid 2765506 00:18:17.086 00:52:09 -- common/autotest_common.sh@955 -- # kill 2765506 00:18:17.086 00:52:09 -- common/autotest_common.sh@960 -- # wait 2765506 00:18:17.654 00:52:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:17.654 00:52:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:17.655 00:52:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:17.655 00:52:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:17.655 00:52:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:17.655 00:52:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:17.655 00:52:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:17.655 00:52:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.192 00:52:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:20.192 00:18:20.192 real 0m26.606s 00:18:20.192 user 2m27.213s 00:18:20.192 sys 0m7.665s 00:18:20.192 00:52:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:20.192 00:52:12 -- common/autotest_common.sh@10 -- # set +x 00:18:20.192 ************************************ 00:18:20.192 END TEST nvmf_fio_target 00:18:20.192 ************************************ 00:18:20.192 00:52:12 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:20.192 00:52:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:20.192 00:52:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:20.192 00:52:12 -- common/autotest_common.sh@10 -- # set +x 00:18:20.192 ************************************ 00:18:20.192 START TEST nvmf_bdevio 00:18:20.192 ************************************ 00:18:20.192 00:52:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:18:20.192 * Looking for test storage... 
00:18:20.192 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:20.192 00:52:12 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:20.192 00:52:12 -- nvmf/common.sh@7 -- # uname -s 00:18:20.192 00:52:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:20.192 00:52:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:20.192 00:52:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:20.192 00:52:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:20.192 00:52:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:20.192 00:52:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:20.192 00:52:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:20.192 00:52:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:20.192 00:52:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:20.192 00:52:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:20.192 00:52:12 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:18:20.192 00:52:12 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:18:20.192 00:52:12 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:20.192 00:52:12 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:20.192 00:52:12 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:20.192 00:52:12 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:20.192 00:52:12 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:20.192 00:52:12 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:20.192 00:52:12 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:20.192 00:52:12 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:20.192 00:52:12 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.192 00:52:12 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.192 00:52:12 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.192 00:52:12 -- paths/export.sh@5 -- # export PATH 00:18:20.192 00:52:12 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:20.192 00:52:12 -- nvmf/common.sh@47 -- # : 0 00:18:20.192 00:52:12 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:20.192 00:52:12 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:20.192 00:52:12 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:20.192 00:52:12 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:20.192 00:52:12 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:20.192 00:52:12 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:20.192 00:52:12 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:20.192 00:52:12 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:20.192 00:52:12 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:20.192 00:52:12 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:20.192 00:52:12 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:20.192 00:52:12 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:20.192 00:52:12 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:20.192 00:52:12 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:20.192 00:52:12 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:20.192 00:52:12 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:20.192 00:52:12 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:20.192 00:52:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:20.192 00:52:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:20.192 00:52:12 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:20.192 00:52:12 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:20.192 00:52:12 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:20.192 00:52:12 -- common/autotest_common.sh@10 -- # set +x 00:18:26.800 00:52:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:26.800 00:52:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:26.800 00:52:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:26.800 00:52:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:26.800 00:52:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:26.800 00:52:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:26.800 00:52:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:26.800 00:52:18 -- nvmf/common.sh@295 -- # net_devs=() 00:18:26.800 00:52:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:26.800 00:52:18 -- 
nvmf/common.sh@296 -- # e810=() 00:18:26.800 00:52:18 -- nvmf/common.sh@296 -- # local -ga e810 00:18:26.800 00:52:18 -- nvmf/common.sh@297 -- # x722=() 00:18:26.800 00:52:18 -- nvmf/common.sh@297 -- # local -ga x722 00:18:26.800 00:52:18 -- nvmf/common.sh@298 -- # mlx=() 00:18:26.800 00:52:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:26.800 00:52:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:26.800 00:52:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:26.800 00:52:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:26.800 00:52:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:26.800 00:52:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:26.800 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:26.800 00:52:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:26.800 00:52:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:26.800 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:26.800 00:52:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:26.800 00:52:18 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:26.800 00:52:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.800 00:52:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:26.800 00:52:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.800 00:52:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:26.800 Found net devices under 0000:27:00.0: cvl_0_0 00:18:26.800 
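The trace just above is nvmf/common.sh mapping each detected PCI NIC to its kernel interface through sysfs. A minimal standalone sketch of that lookup, reusing the 0000:27:00.0 address found in this run (any other PCI function resolves the same way):

    #!/usr/bin/env bash
    # Resolve a PCI network function to its kernel interface name via sysfs,
    # mirroring the pci_net_devs expansion traced above.
    pci=0000:27:00.0                                  # address detected in this run
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # glob, e.g. .../net/cvl_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"

On the hardware in this log the sketch prints the same "Found net devices under 0000:27:00.0: cvl_0_0" line that appears above.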
00:52:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.800 00:52:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:26.800 00:52:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:26.800 00:52:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:26.800 00:52:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:26.800 00:52:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:26.800 Found net devices under 0000:27:00.1: cvl_0_1 00:18:26.800 00:52:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:26.800 00:52:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:26.800 00:52:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:26.800 00:52:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:26.800 00:52:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:26.800 00:52:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:26.800 00:52:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:26.800 00:52:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:26.800 00:52:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:26.800 00:52:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:26.800 00:52:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:26.800 00:52:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:26.800 00:52:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:26.800 00:52:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:26.800 00:52:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:26.800 00:52:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:26.800 00:52:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:26.800 00:52:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:26.800 00:52:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:26.800 00:52:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:26.800 00:52:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:26.800 00:52:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:26.800 00:52:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:26.800 00:52:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:26.800 00:52:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:26.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:26.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.737 ms 00:18:26.800 00:18:26.800 --- 10.0.0.2 ping statistics --- 00:18:26.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.800 rtt min/avg/max/mdev = 0.737/0.737/0.737/0.000 ms 00:18:26.800 00:52:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:26.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:26.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.294 ms 00:18:26.800 00:18:26.800 --- 10.0.0.1 ping statistics --- 00:18:26.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:26.800 rtt min/avg/max/mdev = 0.294/0.294/0.294/0.000 ms 00:18:26.800 00:52:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:26.800 00:52:18 -- nvmf/common.sh@411 -- # return 0 00:18:26.801 00:52:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:26.801 00:52:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:26.801 00:52:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:26.801 00:52:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:26.801 00:52:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:26.801 00:52:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:26.801 00:52:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:26.801 00:52:18 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:26.801 00:52:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:26.801 00:52:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:26.801 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:26.801 00:52:18 -- nvmf/common.sh@470 -- # nvmfpid=2774137 00:18:26.801 00:52:18 -- nvmf/common.sh@471 -- # waitforlisten 2774137 00:18:26.801 00:52:18 -- common/autotest_common.sh@817 -- # '[' -z 2774137 ']' 00:18:26.801 00:52:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.801 00:52:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:26.801 00:52:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:26.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.801 00:52:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:26.801 00:52:18 -- common/autotest_common.sh@10 -- # set +x 00:18:26.801 00:52:18 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:26.801 [2024-04-27 00:52:18.856996] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:26.801 [2024-04-27 00:52:18.857104] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:26.801 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.801 [2024-04-27 00:52:18.983571] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:26.801 [2024-04-27 00:52:19.077937] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.801 [2024-04-27 00:52:19.077982] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.801 [2024-04-27 00:52:19.078001] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.801 [2024-04-27 00:52:19.078017] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.801 [2024-04-27 00:52:19.078029] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
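nvmf_tgt is launched above with core mask -m 0x78; 0x78 is binary 1111000, i.e. bits 3 through 6, which is why the next lines report reactors starting on cores 3, 4, 5 and 6. A small pure-bash sketch (no SPDK assumptions) for expanding such a mask by hand:

    #!/usr/bin/env bash
    # Expand a DPDK/SPDK-style hex core mask into the CPU cores it selects.
    mask=0x78                                   # mask passed to nvmf_tgt in this run
    cores=()
    for bit in $(seq 0 63); do
      (( (mask >> bit) & 1 )) && cores+=("$bit")
    done
    echo "mask $mask -> cores: ${cores[*]}"     # mask 0x78 -> cores: 3 4 5 6

The same arithmetic accounts for the bdevio run further down, which uses -c 0x7 and accordingly starts reactors on cores 0 through 2.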
00:18:26.801 [2024-04-27 00:52:19.078137] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:26.801 [2024-04-27 00:52:19.078208] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:26.801 [2024-04-27 00:52:19.078195] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:26.801 [2024-04-27 00:52:19.078253] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:27.064 00:52:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:27.064 00:52:19 -- common/autotest_common.sh@850 -- # return 0 00:18:27.064 00:52:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:27.064 00:52:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:27.064 00:52:19 -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 00:52:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.064 00:52:19 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:27.064 00:52:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.064 00:52:19 -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 [2024-04-27 00:52:19.584083] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:27.064 00:52:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.064 00:52:19 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:27.064 00:52:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.064 00:52:19 -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 Malloc0 00:18:27.064 00:52:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.064 00:52:19 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:27.064 00:52:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.064 00:52:19 -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 00:52:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.064 00:52:19 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:27.064 00:52:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.064 00:52:19 -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 00:52:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.064 00:52:19 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:27.064 00:52:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.064 00:52:19 -- common/autotest_common.sh@10 -- # set +x 00:18:27.064 [2024-04-27 00:52:19.653370] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.064 00:52:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.064 00:52:19 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:27.064 00:52:19 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:27.064 00:52:19 -- nvmf/common.sh@521 -- # config=() 00:18:27.064 00:52:19 -- nvmf/common.sh@521 -- # local subsystem config 00:18:27.065 00:52:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:27.065 00:52:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:27.065 { 00:18:27.065 "params": { 00:18:27.065 "name": "Nvme$subsystem", 00:18:27.065 "trtype": "$TEST_TRANSPORT", 00:18:27.065 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:27.065 "adrfam": "ipv4", 00:18:27.065 "trsvcid": "$NVMF_PORT", 
00:18:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:27.065 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:27.065 "hdgst": ${hdgst:-false}, 00:18:27.065 "ddgst": ${ddgst:-false} 00:18:27.065 }, 00:18:27.065 "method": "bdev_nvme_attach_controller" 00:18:27.065 } 00:18:27.065 EOF 00:18:27.065 )") 00:18:27.065 00:52:19 -- nvmf/common.sh@543 -- # cat 00:18:27.065 00:52:19 -- nvmf/common.sh@545 -- # jq . 00:18:27.065 00:52:19 -- nvmf/common.sh@546 -- # IFS=, 00:18:27.065 00:52:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:27.065 "params": { 00:18:27.065 "name": "Nvme1", 00:18:27.065 "trtype": "tcp", 00:18:27.065 "traddr": "10.0.0.2", 00:18:27.065 "adrfam": "ipv4", 00:18:27.065 "trsvcid": "4420", 00:18:27.065 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.065 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.065 "hdgst": false, 00:18:27.065 "ddgst": false 00:18:27.065 }, 00:18:27.065 "method": "bdev_nvme_attach_controller" 00:18:27.065 }' 00:18:27.065 [2024-04-27 00:52:19.725497] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:27.066 [2024-04-27 00:52:19.725605] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2774452 ] 00:18:27.337 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.337 [2024-04-27 00:52:19.839295] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:27.337 [2024-04-27 00:52:19.930589] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.337 [2024-04-27 00:52:19.930686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.337 [2024-04-27 00:52:19.930691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:27.598 I/O targets: 00:18:27.598 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:27.598 00:18:27.598 00:18:27.598 CUnit - A unit testing framework for C - Version 2.1-3 00:18:27.598 http://cunit.sourceforge.net/ 00:18:27.598 00:18:27.598 00:18:27.598 Suite: bdevio tests on: Nvme1n1 00:18:27.598 Test: blockdev write read block ...passed 00:18:27.598 Test: blockdev write zeroes read block ...passed 00:18:27.598 Test: blockdev write zeroes read no split ...passed 00:18:27.859 Test: blockdev write zeroes read split ...passed 00:18:27.859 Test: blockdev write zeroes read split partial ...passed 00:18:27.859 Test: blockdev reset ...[2024-04-27 00:52:20.380173] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:27.859 [2024-04-27 00:52:20.380286] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:18:27.859 [2024-04-27 00:52:20.490269] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:27.859 passed 00:18:27.859 Test: blockdev write read 8 blocks ...passed 00:18:27.859 Test: blockdev write read size > 128k ...passed 00:18:27.859 Test: blockdev write read invalid size ...passed 00:18:27.859 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:27.859 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:27.859 Test: blockdev write read max offset ...passed 00:18:28.117 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:28.117 Test: blockdev writev readv 8 blocks ...passed 00:18:28.117 Test: blockdev writev readv 30 x 1block ...passed 00:18:28.117 Test: blockdev writev readv block ...passed 00:18:28.117 Test: blockdev writev readv size > 128k ...passed 00:18:28.117 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:28.117 Test: blockdev comparev and writev ...[2024-04-27 00:52:20.666680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.118 [2024-04-27 00:52:20.666723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.666740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.118 [2024-04-27 00:52:20.666750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.667029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.118 [2024-04-27 00:52:20.667043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.667056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.118 [2024-04-27 00:52:20.667064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.667312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.118 [2024-04-27 00:52:20.667322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.667337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.118 [2024-04-27 00:52:20.667345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.667590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.118 [2024-04-27 00:52:20.667599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.667612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:28.118 [2024-04-27 00:52:20.667622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:28.118 passed 00:18:28.118 Test: blockdev nvme passthru rw ...passed 00:18:28.118 Test: blockdev nvme passthru vendor specific ...[2024-04-27 00:52:20.750756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.118 [2024-04-27 00:52:20.750784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.750911] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.118 [2024-04-27 00:52:20.750919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.751045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.118 [2024-04-27 00:52:20.751054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:28.118 [2024-04-27 00:52:20.751173] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:28.118 [2024-04-27 00:52:20.751182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:28.118 passed 00:18:28.118 Test: blockdev nvme admin passthru ...passed 00:18:28.118 Test: blockdev copy ...passed 00:18:28.118 00:18:28.118 Run Summary: Type Total Ran Passed Failed Inactive 00:18:28.118 suites 1 1 n/a 0 0 00:18:28.118 tests 23 23 23 0 0 00:18:28.118 asserts 152 152 152 0 n/a 00:18:28.118 00:18:28.118 Elapsed time = 1.271 seconds 00:18:28.684 00:52:21 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.684 00:52:21 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:28.684 00:52:21 -- common/autotest_common.sh@10 -- # set +x 00:18:28.684 00:52:21 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:28.684 00:52:21 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:28.684 00:52:21 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:28.684 00:52:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:28.684 00:52:21 -- nvmf/common.sh@117 -- # sync 00:18:28.684 00:52:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:28.684 00:52:21 -- nvmf/common.sh@120 -- # set +e 00:18:28.684 00:52:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:28.684 00:52:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:28.684 rmmod nvme_tcp 00:18:28.684 rmmod nvme_fabrics 00:18:28.684 rmmod nvme_keyring 00:18:28.684 00:52:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:28.684 00:52:21 -- nvmf/common.sh@124 -- # set -e 00:18:28.684 00:52:21 -- nvmf/common.sh@125 -- # return 0 00:18:28.684 00:52:21 -- nvmf/common.sh@478 -- # '[' -n 2774137 ']' 00:18:28.684 00:52:21 -- nvmf/common.sh@479 -- # killprocess 2774137 00:18:28.684 00:52:21 -- common/autotest_common.sh@936 -- # '[' -z 2774137 ']' 00:18:28.684 00:52:21 -- common/autotest_common.sh@940 -- # kill -0 2774137 00:18:28.684 00:52:21 -- common/autotest_common.sh@941 -- # uname 00:18:28.684 00:52:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:28.684 00:52:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2774137 00:18:28.684 00:52:21 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:18:28.684 00:52:21 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:18:28.684 00:52:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2774137' 00:18:28.684 killing process with pid 2774137 00:18:28.684 00:52:21 -- common/autotest_common.sh@955 -- # kill 2774137 00:18:28.684 00:52:21 -- common/autotest_common.sh@960 -- # wait 2774137 00:18:29.253 00:52:21 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:29.253 00:52:21 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:29.253 00:52:21 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:29.253 00:52:21 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:29.253 00:52:21 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:29.253 00:52:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:29.253 00:52:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:29.253 00:52:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.221 00:52:23 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:31.221 00:18:31.221 real 0m11.414s 00:18:31.221 user 0m14.860s 00:18:31.221 sys 0m5.340s 00:18:31.221 00:52:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:31.221 00:52:23 -- common/autotest_common.sh@10 -- # set +x 00:18:31.221 ************************************ 00:18:31.221 END TEST nvmf_bdevio 00:18:31.221 ************************************ 00:18:31.221 00:52:23 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:18:31.221 00:52:23 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:31.221 00:52:23 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:18:31.221 00:52:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.222 00:52:23 -- common/autotest_common.sh@10 -- # set +x 00:18:31.482 ************************************ 00:18:31.482 START TEST nvmf_bdevio_no_huge 00:18:31.482 ************************************ 00:18:31.482 00:52:23 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:31.482 * Looking for test storage... 
00:18:31.482 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:31.482 00:52:24 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.482 00:52:24 -- nvmf/common.sh@7 -- # uname -s 00:18:31.482 00:52:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.482 00:52:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.482 00:52:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.482 00:52:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.482 00:52:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.482 00:52:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.482 00:52:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.482 00:52:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.482 00:52:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.482 00:52:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.482 00:52:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:18:31.482 00:52:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:18:31.482 00:52:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.482 00:52:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.482 00:52:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:31.482 00:52:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.482 00:52:24 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:31.482 00:52:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.482 00:52:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.482 00:52:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.482 00:52:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.482 00:52:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.482 00:52:24 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.482 00:52:24 -- paths/export.sh@5 -- # export PATH 00:18:31.482 00:52:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.482 00:52:24 -- nvmf/common.sh@47 -- # : 0 00:18:31.482 00:52:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:31.482 00:52:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:31.482 00:52:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.482 00:52:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.482 00:52:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.482 00:52:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:31.482 00:52:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:31.482 00:52:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:31.482 00:52:24 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:31.482 00:52:24 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:31.482 00:52:24 -- target/bdevio.sh@14 -- # nvmftestinit 00:18:31.482 00:52:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:31.482 00:52:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:31.482 00:52:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:31.482 00:52:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:31.482 00:52:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:31.482 00:52:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:31.482 00:52:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:31.482 00:52:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.482 00:52:24 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:31.482 00:52:24 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:31.482 00:52:24 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:31.482 00:52:24 -- common/autotest_common.sh@10 -- # set +x 00:18:36.759 00:52:29 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:36.759 00:52:29 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:36.759 00:52:29 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:36.759 00:52:29 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:36.759 00:52:29 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:36.759 00:52:29 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:36.759 00:52:29 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:36.759 00:52:29 -- nvmf/common.sh@295 -- # net_devs=() 00:18:36.759 00:52:29 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:36.759 00:52:29 -- 
nvmf/common.sh@296 -- # e810=() 00:18:36.759 00:52:29 -- nvmf/common.sh@296 -- # local -ga e810 00:18:36.759 00:52:29 -- nvmf/common.sh@297 -- # x722=() 00:18:36.759 00:52:29 -- nvmf/common.sh@297 -- # local -ga x722 00:18:36.759 00:52:29 -- nvmf/common.sh@298 -- # mlx=() 00:18:36.759 00:52:29 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:36.759 00:52:29 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:36.759 00:52:29 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:36.759 00:52:29 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:36.759 00:52:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.759 00:52:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:36.759 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:36.759 00:52:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:36.759 00:52:29 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:36.759 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:36.759 00:52:29 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:36.759 00:52:29 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.759 00:52:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.759 00:52:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:36.759 00:52:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.759 00:52:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:36.759 Found net devices under 0000:27:00.0: cvl_0_0 00:18:36.759 
00:52:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.759 00:52:29 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:36.759 00:52:29 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:36.759 00:52:29 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:36.759 00:52:29 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:36.759 00:52:29 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:36.759 Found net devices under 0000:27:00.1: cvl_0_1 00:18:36.759 00:52:29 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:36.759 00:52:29 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:36.759 00:52:29 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:36.759 00:52:29 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:36.759 00:52:29 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:36.759 00:52:29 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:36.759 00:52:29 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:36.759 00:52:29 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:36.759 00:52:29 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:36.759 00:52:29 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:36.759 00:52:29 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:36.759 00:52:29 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:36.759 00:52:29 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:36.759 00:52:29 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:36.759 00:52:29 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:36.759 00:52:29 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:36.759 00:52:29 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:36.759 00:52:29 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:36.759 00:52:29 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:36.759 00:52:29 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:36.759 00:52:29 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:36.759 00:52:29 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:36.759 00:52:29 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:36.759 00:52:29 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:36.759 00:52:29 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:36.759 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:36.759 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.516 ms 00:18:36.759 00:18:36.759 --- 10.0.0.2 ping statistics --- 00:18:36.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.759 rtt min/avg/max/mdev = 0.516/0.516/0.516/0.000 ms 00:18:36.759 00:52:29 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:36.759 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:36.759 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.259 ms 00:18:36.759 00:18:36.760 --- 10.0.0.1 ping statistics --- 00:18:36.760 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:36.760 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:18:36.760 00:52:29 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:36.760 00:52:29 -- nvmf/common.sh@411 -- # return 0 00:18:36.760 00:52:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:36.760 00:52:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:36.760 00:52:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:36.760 00:52:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:36.760 00:52:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:36.760 00:52:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:36.760 00:52:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:36.760 00:52:29 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:36.760 00:52:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:36.760 00:52:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:36.760 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:18:36.760 00:52:29 -- nvmf/common.sh@470 -- # nvmfpid=2778652 00:18:36.760 00:52:29 -- nvmf/common.sh@471 -- # waitforlisten 2778652 00:18:36.760 00:52:29 -- common/autotest_common.sh@817 -- # '[' -z 2778652 ']' 00:18:36.760 00:52:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:36.760 00:52:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:36.760 00:52:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:36.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:36.760 00:52:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:36.760 00:52:29 -- common/autotest_common.sh@10 -- # set +x 00:18:36.760 00:52:29 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:37.020 [2024-04-27 00:52:29.528387] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:37.020 [2024-04-27 00:52:29.528503] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:37.020 [2024-04-27 00:52:29.676726] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:37.279 [2024-04-27 00:52:29.794232] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:37.279 [2024-04-27 00:52:29.794275] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:37.279 [2024-04-27 00:52:29.794288] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:37.279 [2024-04-27 00:52:29.794300] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:37.279 [2024-04-27 00:52:29.794310] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
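Before the target app above could bind 10.0.0.2, nvmf_tcp_init split the two ports of the NIC across network namespaces: cvl_0_0 becomes the target side (10.0.0.2) inside cvl_0_0_ns_spdk, cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the cross-namespace pings confirm the path. The same setup, condensed from the trace (interface names are specific to this host):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into its own namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                             # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> root ns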
00:18:37.279 [2024-04-27 00:52:29.794427] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:37.279 [2024-04-27 00:52:29.794563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:18:37.279 [2024-04-27 00:52:29.794661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:37.279 [2024-04-27 00:52:29.794695] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:18:37.537 00:52:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:37.537 00:52:30 -- common/autotest_common.sh@850 -- # return 0 00:18:37.537 00:52:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:37.537 00:52:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:37.537 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:18:37.795 00:52:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:37.795 00:52:30 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:37.795 00:52:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.795 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:18:37.795 [2024-04-27 00:52:30.256707] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:37.795 00:52:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.795 00:52:30 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:37.795 00:52:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.795 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:18:37.795 Malloc0 00:18:37.795 00:52:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.795 00:52:30 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:37.795 00:52:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.795 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:18:37.795 00:52:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.795 00:52:30 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:37.795 00:52:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.795 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:18:37.795 00:52:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.795 00:52:30 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:37.795 00:52:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:37.795 00:52:30 -- common/autotest_common.sh@10 -- # set +x 00:18:37.795 [2024-04-27 00:52:30.310341] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:37.795 00:52:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:37.795 00:52:30 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:37.795 00:52:30 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:37.795 00:52:30 -- nvmf/common.sh@521 -- # config=() 00:18:37.795 00:52:30 -- nvmf/common.sh@521 -- # local subsystem config 00:18:37.795 00:52:30 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:18:37.795 00:52:30 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:18:37.795 { 00:18:37.795 "params": { 00:18:37.795 "name": "Nvme$subsystem", 00:18:37.795 "trtype": "$TEST_TRANSPORT", 00:18:37.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:37.795 "adrfam": "ipv4", 00:18:37.795 "trsvcid": 
"$NVMF_PORT", 00:18:37.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:37.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:37.795 "hdgst": ${hdgst:-false}, 00:18:37.795 "ddgst": ${ddgst:-false} 00:18:37.795 }, 00:18:37.795 "method": "bdev_nvme_attach_controller" 00:18:37.795 } 00:18:37.795 EOF 00:18:37.795 )") 00:18:37.795 00:52:30 -- nvmf/common.sh@543 -- # cat 00:18:37.795 00:52:30 -- nvmf/common.sh@545 -- # jq . 00:18:37.795 00:52:30 -- nvmf/common.sh@546 -- # IFS=, 00:18:37.795 00:52:30 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:18:37.795 "params": { 00:18:37.795 "name": "Nvme1", 00:18:37.795 "trtype": "tcp", 00:18:37.795 "traddr": "10.0.0.2", 00:18:37.795 "adrfam": "ipv4", 00:18:37.795 "trsvcid": "4420", 00:18:37.795 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.795 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.795 "hdgst": false, 00:18:37.795 "ddgst": false 00:18:37.795 }, 00:18:37.795 "method": "bdev_nvme_attach_controller" 00:18:37.795 }' 00:18:37.795 [2024-04-27 00:52:30.382249] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:37.795 [2024-04-27 00:52:30.382358] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2778969 ] 00:18:38.054 [2024-04-27 00:52:30.514495] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:38.054 [2024-04-27 00:52:30.630547] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:38.054 [2024-04-27 00:52:30.630647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.055 [2024-04-27 00:52:30.630653] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:38.314 I/O targets: 00:18:38.314 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:38.314 00:18:38.314 00:18:38.314 CUnit - A unit testing framework for C - Version 2.1-3 00:18:38.314 http://cunit.sourceforge.net/ 00:18:38.314 00:18:38.314 00:18:38.314 Suite: bdevio tests on: Nvme1n1 00:18:38.574 Test: blockdev write read block ...passed 00:18:38.574 Test: blockdev write zeroes read block ...passed 00:18:38.574 Test: blockdev write zeroes read no split ...passed 00:18:38.574 Test: blockdev write zeroes read split ...passed 00:18:38.574 Test: blockdev write zeroes read split partial ...passed 00:18:38.574 Test: blockdev reset ...[2024-04-27 00:52:31.181307] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:38.574 [2024-04-27 00:52:31.181421] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005a40 (9): Bad file descriptor 00:18:38.832 [2024-04-27 00:52:31.331358] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:38.832 passed 00:18:38.832 Test: blockdev write read 8 blocks ...passed 00:18:38.832 Test: blockdev write read size > 128k ...passed 00:18:38.832 Test: blockdev write read invalid size ...passed 00:18:38.832 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:38.832 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:38.832 Test: blockdev write read max offset ...passed 00:18:38.832 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:38.832 Test: blockdev writev readv 8 blocks ...passed 00:18:38.832 Test: blockdev writev readv 30 x 1block ...passed 00:18:38.832 Test: blockdev writev readv block ...passed 00:18:38.832 Test: blockdev writev readv size > 128k ...passed 00:18:38.832 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:38.832 Test: blockdev comparev and writev ...[2024-04-27 00:52:31.507772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.832 [2024-04-27 00:52:31.507815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:38.832 [2024-04-27 00:52:31.507833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.832 [2024-04-27 00:52:31.507842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:38.832 [2024-04-27 00:52:31.508105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.832 [2024-04-27 00:52:31.508114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:38.832 [2024-04-27 00:52:31.508129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.832 [2024-04-27 00:52:31.508138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:38.832 [2024-04-27 00:52:31.508401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.832 [2024-04-27 00:52:31.508410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:38.832 [2024-04-27 00:52:31.508424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.832 [2024-04-27 00:52:31.508432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:38.832 [2024-04-27 00:52:31.508679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.832 [2024-04-27 00:52:31.508688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:38.832 [2024-04-27 00:52:31.508701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:38.832 [2024-04-27 00:52:31.508709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:39.090 passed 00:18:39.090 Test: blockdev nvme passthru rw ...passed 00:18:39.090 Test: blockdev nvme passthru vendor specific ...[2024-04-27 00:52:31.591638] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.090 [2024-04-27 00:52:31.591663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:39.090 [2024-04-27 00:52:31.591792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.090 [2024-04-27 00:52:31.591801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:39.090 [2024-04-27 00:52:31.591928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.090 [2024-04-27 00:52:31.591937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:39.090 [2024-04-27 00:52:31.592070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:39.090 [2024-04-27 00:52:31.592078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:39.090 passed 00:18:39.090 Test: blockdev nvme admin passthru ...passed 00:18:39.090 Test: blockdev copy ...passed 00:18:39.090 00:18:39.090 Run Summary: Type Total Ran Passed Failed Inactive 00:18:39.090 suites 1 1 n/a 0 0 00:18:39.090 tests 23 23 23 0 0 00:18:39.090 asserts 152 152 152 0 n/a 00:18:39.090 00:18:39.090 Elapsed time = 1.317 seconds 00:18:39.353 00:52:31 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:39.353 00:52:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:39.353 00:52:31 -- common/autotest_common.sh@10 -- # set +x 00:18:39.353 00:52:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:39.353 00:52:31 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:39.353 00:52:31 -- target/bdevio.sh@30 -- # nvmftestfini 00:18:39.354 00:52:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:18:39.354 00:52:31 -- nvmf/common.sh@117 -- # sync 00:18:39.354 00:52:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:39.354 00:52:32 -- nvmf/common.sh@120 -- # set +e 00:18:39.354 00:52:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:39.354 00:52:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:39.354 rmmod nvme_tcp 00:18:39.354 rmmod nvme_fabrics 00:18:39.354 rmmod nvme_keyring 00:18:39.614 00:52:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:39.614 00:52:32 -- nvmf/common.sh@124 -- # set -e 00:18:39.614 00:52:32 -- nvmf/common.sh@125 -- # return 0 00:18:39.614 00:52:32 -- nvmf/common.sh@478 -- # '[' -n 2778652 ']' 00:18:39.614 00:52:32 -- nvmf/common.sh@479 -- # killprocess 2778652 00:18:39.614 00:52:32 -- common/autotest_common.sh@936 -- # '[' -z 2778652 ']' 00:18:39.614 00:52:32 -- common/autotest_common.sh@940 -- # kill -0 2778652 00:18:39.614 00:52:32 -- common/autotest_common.sh@941 -- # uname 00:18:39.614 00:52:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:39.614 00:52:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2778652 00:18:39.614 00:52:32 -- 
common/autotest_common.sh@942 -- # process_name=reactor_3 00:18:39.614 00:52:32 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:18:39.614 00:52:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2778652' 00:18:39.614 killing process with pid 2778652 00:18:39.614 00:52:32 -- common/autotest_common.sh@955 -- # kill 2778652 00:18:39.614 00:52:32 -- common/autotest_common.sh@960 -- # wait 2778652 00:18:39.873 00:52:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:18:39.873 00:52:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:18:39.873 00:52:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:18:39.873 00:52:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:39.873 00:52:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:39.873 00:52:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:39.873 00:52:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:39.873 00:52:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.415 00:52:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:42.415 00:18:42.415 real 0m10.554s 00:18:42.415 user 0m15.087s 00:18:42.415 sys 0m4.854s 00:18:42.415 00:52:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:42.415 00:52:34 -- common/autotest_common.sh@10 -- # set +x 00:18:42.415 ************************************ 00:18:42.415 END TEST nvmf_bdevio_no_huge 00:18:42.415 ************************************ 00:18:42.415 00:52:34 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:42.415 00:52:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:42.415 00:52:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.415 00:52:34 -- common/autotest_common.sh@10 -- # set +x 00:18:42.415 ************************************ 00:18:42.415 START TEST nvmf_tls 00:18:42.415 ************************************ 00:18:42.415 00:52:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:42.415 * Looking for test storage... 
00:18:42.415 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:18:42.415 00:52:34 -- target/tls.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:18:42.415 00:52:34 -- nvmf/common.sh@7 -- # uname -s 00:18:42.415 00:52:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:42.415 00:52:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:42.415 00:52:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:42.415 00:52:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:42.415 00:52:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:42.415 00:52:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:42.415 00:52:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:42.415 00:52:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:42.415 00:52:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:42.415 00:52:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:42.415 00:52:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:18:42.415 00:52:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:18:42.415 00:52:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:42.415 00:52:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:42.415 00:52:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:18:42.415 00:52:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:42.415 00:52:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:18:42.415 00:52:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:42.415 00:52:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:42.415 00:52:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:42.415 00:52:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.415 00:52:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.415 00:52:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.415 00:52:34 -- paths/export.sh@5 -- # export PATH 00:18:42.416 00:52:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:42.416 00:52:34 -- nvmf/common.sh@47 -- # : 0 00:18:42.416 00:52:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:42.416 00:52:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:42.416 00:52:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:42.416 00:52:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:42.416 00:52:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:42.416 00:52:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:42.416 00:52:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:42.416 00:52:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:42.416 00:52:34 -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:18:42.416 00:52:34 -- target/tls.sh@62 -- # nvmftestinit 00:18:42.416 00:52:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:18:42.416 00:52:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:42.416 00:52:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:18:42.416 00:52:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:18:42.416 00:52:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:18:42.416 00:52:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:42.416 00:52:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:42.416 00:52:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:42.416 00:52:34 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:18:42.416 00:52:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:18:42.416 00:52:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:18:42.416 00:52:34 -- common/autotest_common.sh@10 -- # set +x 00:18:47.690 00:52:39 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:47.690 00:52:39 -- nvmf/common.sh@291 -- # pci_devs=() 00:18:47.690 00:52:39 -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:47.690 00:52:39 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:47.690 00:52:39 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:47.690 00:52:39 -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:47.690 00:52:39 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:47.690 00:52:39 -- nvmf/common.sh@295 -- # net_devs=() 00:18:47.690 00:52:39 -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:47.690 00:52:39 -- nvmf/common.sh@296 -- # e810=() 
00:18:47.690 00:52:39 -- nvmf/common.sh@296 -- # local -ga e810 00:18:47.690 00:52:39 -- nvmf/common.sh@297 -- # x722=() 00:18:47.690 00:52:39 -- nvmf/common.sh@297 -- # local -ga x722 00:18:47.690 00:52:39 -- nvmf/common.sh@298 -- # mlx=() 00:18:47.690 00:52:39 -- nvmf/common.sh@298 -- # local -ga mlx 00:18:47.690 00:52:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:47.690 00:52:39 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:47.690 00:52:39 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:47.690 00:52:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.690 00:52:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:18:47.690 Found 0000:27:00.0 (0x8086 - 0x159b) 00:18:47.690 00:52:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:47.690 00:52:39 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:18:47.690 Found 0000:27:00.1 (0x8086 - 0x159b) 00:18:47.690 00:52:39 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:47.690 00:52:39 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.690 00:52:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.690 00:52:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:47.690 00:52:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.690 00:52:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:18:47.690 Found net devices under 0000:27:00.0: cvl_0_0 00:18:47.690 00:52:39 -- nvmf/common.sh@390 -- # 
net_devs+=("${pci_net_devs[@]}") 00:18:47.690 00:52:39 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:47.690 00:52:39 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:47.690 00:52:39 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:18:47.690 00:52:39 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:47.690 00:52:39 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:18:47.690 Found net devices under 0000:27:00.1: cvl_0_1 00:18:47.690 00:52:39 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:18:47.690 00:52:39 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:18:47.690 00:52:39 -- nvmf/common.sh@403 -- # is_hw=yes 00:18:47.690 00:52:39 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:18:47.690 00:52:39 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:18:47.690 00:52:39 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:47.690 00:52:39 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:47.690 00:52:39 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:47.690 00:52:39 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:47.690 00:52:39 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:47.690 00:52:39 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:47.690 00:52:39 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:47.690 00:52:39 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:47.690 00:52:39 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:47.690 00:52:39 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:47.690 00:52:39 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:47.690 00:52:39 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:47.690 00:52:39 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:47.690 00:52:40 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:47.690 00:52:40 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:47.690 00:52:40 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:47.690 00:52:40 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:47.690 00:52:40 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:47.690 00:52:40 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:47.690 00:52:40 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:47.690 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:47.690 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:18:47.690 00:18:47.690 --- 10.0.0.2 ping statistics --- 00:18:47.690 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.690 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:18:47.690 00:52:40 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:47.690 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:47.690 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:18:47.691 00:18:47.691 --- 10.0.0.1 ping statistics --- 00:18:47.691 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:47.691 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:18:47.691 00:52:40 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:47.691 00:52:40 -- nvmf/common.sh@411 -- # return 0 00:18:47.691 00:52:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:18:47.691 00:52:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:47.691 00:52:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:18:47.691 00:52:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:18:47.691 00:52:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:47.691 00:52:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:18:47.691 00:52:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:18:47.691 00:52:40 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:47.691 00:52:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:18:47.691 00:52:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:18:47.691 00:52:40 -- common/autotest_common.sh@10 -- # set +x 00:18:47.691 00:52:40 -- nvmf/common.sh@470 -- # nvmfpid=2783171 00:18:47.691 00:52:40 -- nvmf/common.sh@471 -- # waitforlisten 2783171 00:18:47.691 00:52:40 -- common/autotest_common.sh@817 -- # '[' -z 2783171 ']' 00:18:47.691 00:52:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:47.691 00:52:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:47.691 00:52:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:47.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:47.691 00:52:40 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:47.691 00:52:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:47.691 00:52:40 -- common/autotest_common.sh@10 -- # set +x 00:18:47.691 [2024-04-27 00:52:40.264459] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:18:47.691 [2024-04-27 00:52:40.264561] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:47.691 EAL: No free 2048 kB hugepages reported on node 1 00:18:47.951 [2024-04-27 00:52:40.412409] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:47.951 [2024-04-27 00:52:40.565353] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:47.951 [2024-04-27 00:52:40.565400] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:47.951 [2024-04-27 00:52:40.565417] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:47.951 [2024-04-27 00:52:40.565432] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:47.951 [2024-04-27 00:52:40.565444] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
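Note --wait-for-rpc on the nvmf_tgt command line above: the target starts with framework initialization paused, which is what lets tls.sh retune the socket layer before any listener exists. The entries that follow do exactly this; reduced to its steps (binary and rpc.py paths abbreviated):

ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
rpc.py sock_set_default_impl -i ssl                   # new sockets go through the ssl implementation
rpc.py sock_impl_set_options -i ssl --tls-version 13  # require TLS 1.3
rpc.py framework_start_init                           # resume init; listeners are created after this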
00:18:47.951 [2024-04-27 00:52:40.565495] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:48.518 00:52:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:48.518 00:52:40 -- common/autotest_common.sh@850 -- # return 0 00:18:48.518 00:52:40 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:18:48.518 00:52:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:18:48.518 00:52:40 -- common/autotest_common.sh@10 -- # set +x 00:18:48.518 00:52:40 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:48.518 00:52:40 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:48.518 00:52:40 -- target/tls.sh@70 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:48.518 true 00:18:48.518 00:52:41 -- target/tls.sh@73 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.518 00:52:41 -- target/tls.sh@73 -- # jq -r .tls_version 00:18:48.777 00:52:41 -- target/tls.sh@73 -- # version=0 00:18:48.777 00:52:41 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:48.777 00:52:41 -- target/tls.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:48.777 00:52:41 -- target/tls.sh@81 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:48.777 00:52:41 -- target/tls.sh@81 -- # jq -r .tls_version 00:18:49.038 00:52:41 -- target/tls.sh@81 -- # version=13 00:18:49.038 00:52:41 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:49.038 00:52:41 -- target/tls.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:49.038 00:52:41 -- target/tls.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.038 00:52:41 -- target/tls.sh@89 -- # jq -r .tls_version 00:18:49.299 00:52:41 -- target/tls.sh@89 -- # version=7 00:18:49.299 00:52:41 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:49.299 00:52:41 -- target/tls.sh@96 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.299 00:52:41 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:49.299 00:52:41 -- target/tls.sh@96 -- # ktls=false 00:18:49.299 00:52:41 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:49.299 00:52:41 -- target/tls.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:49.558 00:52:42 -- target/tls.sh@104 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.558 00:52:42 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:49.558 00:52:42 -- target/tls.sh@104 -- # ktls=true 00:18:49.558 00:52:42 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:49.558 00:52:42 -- target/tls.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:49.817 00:52:42 -- target/tls.sh@112 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:49.817 00:52:42 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:49.817 00:52:42 -- target/tls.sh@112 -- # ktls=false 00:18:49.817 00:52:42 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:49.817 00:52:42 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:49.817 00:52:42 -- nvmf/common.sh@704 -- # 
format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:49.817 00:52:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:49.817 00:52:42 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:49.817 00:52:42 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:18:49.817 00:52:42 -- nvmf/common.sh@693 -- # digest=1 00:18:49.817 00:52:42 -- nvmf/common.sh@694 -- # python - 00:18:49.817 00:52:42 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:49.817 00:52:42 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:49.817 00:52:42 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:49.817 00:52:42 -- nvmf/common.sh@691 -- # local prefix key digest 00:18:49.817 00:52:42 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:18:49.817 00:52:42 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:18:49.817 00:52:42 -- nvmf/common.sh@693 -- # digest=1 00:18:49.817 00:52:42 -- nvmf/common.sh@694 -- # python - 00:18:49.817 00:52:42 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:49.817 00:52:42 -- target/tls.sh@121 -- # mktemp 00:18:49.817 00:52:42 -- target/tls.sh@121 -- # key_path=/tmp/tmp.epv6zyfCRr 00:18:49.817 00:52:42 -- target/tls.sh@122 -- # mktemp 00:18:49.817 00:52:42 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.BEoxeTKi2E 00:18:49.817 00:52:42 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:49.817 00:52:42 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:49.817 00:52:42 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.epv6zyfCRr 00:18:49.817 00:52:42 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.BEoxeTKi2E 00:18:49.817 00:52:42 -- target/tls.sh@130 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:50.075 00:52:42 -- target/tls.sh@131 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:50.333 00:52:42 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.epv6zyfCRr 00:18:50.333 00:52:42 -- target/tls.sh@49 -- # local key=/tmp/tmp.epv6zyfCRr 00:18:50.333 00:52:42 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:50.591 [2024-04-27 00:52:43.045325] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:50.592 00:52:43 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:50.592 00:52:43 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:50.851 [2024-04-27 00:52:43.293369] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:50.851 [2024-04-27 00:52:43.293586] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:50.851 00:52:43 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:50.851 malloc0 00:18:50.851 00:52:43 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:51.111 00:52:43 -- 
target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.epv6zyfCRr 00:18:51.111 [2024-04-27 00:52:43.711940] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:51.111 00:52:43 -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.epv6zyfCRr 00:18:51.111 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.406 Initializing NVMe Controllers 00:19:03.406 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:03.406 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:03.406 Initialization complete. Launching workers. 00:19:03.406 ======================================================== 00:19:03.406 Latency(us) 00:19:03.406 Device Information : IOPS MiB/s Average min max 00:19:03.406 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16868.39 65.89 3794.42 1124.09 5560.97 00:19:03.406 ======================================================== 00:19:03.406 Total : 16868.39 65.89 3794.42 1124.09 5560.97 00:19:03.406 00:19:03.406 00:52:53 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.epv6zyfCRr 00:19:03.406 00:52:53 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:03.406 00:52:53 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:03.406 00:52:53 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:03.406 00:52:53 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.epv6zyfCRr' 00:19:03.406 00:52:53 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:03.406 00:52:53 -- target/tls.sh@28 -- # bdevperf_pid=2785909 00:19:03.406 00:52:53 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:03.406 00:52:53 -- target/tls.sh@31 -- # waitforlisten 2785909 /var/tmp/bdevperf.sock 00:19:03.406 00:52:53 -- common/autotest_common.sh@817 -- # '[' -z 2785909 ']' 00:19:03.406 00:52:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:03.406 00:52:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:03.406 00:52:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:03.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:03.406 00:52:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:03.406 00:52:53 -- common/autotest_common.sh@10 -- # set +x 00:19:03.406 00:52:53 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:03.406 [2024-04-27 00:52:53.967385] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
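The format_interchange_psk calls traced above wrap a raw key in the NVMe TLS PSK interchange format: the NVMeTLSkey-1 prefix, a two-digit hash identifier (01 for SHA-256 here; 02 for the SHA-384 key used later in this log), and a base64 payload. A sketch of the encoding, assuming the payload is the key bytes followed by their little-endian CRC-32, as nvmf/common.sh's format_key computes with an inline python snippet:

key="00112233445566778899aabbccddeeff"   # first key from the trace above
python3 - "$key" 1 <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
digest = int(sys.argv[2])                  # 1 selects SHA-256, 2 SHA-384
crc = struct.pack("<I", zlib.crc32(key))   # assumed little-endian CRC-32 trailer
print("NVMeTLSkey-1:{:02d}:{}:".format(digest, base64.b64encode(key + crc).decode()))
PY

The resulting key files are then consumed directly by the tools traced above, after chmod 0600: --psk-path for spdk_nvme_perf and --psk for bdev_nvme_attach_controller.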
00:19:03.406 [2024-04-27 00:52:53.967507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2785909 ] 00:19:03.406 EAL: No free 2048 kB hugepages reported on node 1 00:19:03.406 [2024-04-27 00:52:54.051960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:03.406 [2024-04-27 00:52:54.141030] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.406 00:52:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:03.406 00:52:54 -- common/autotest_common.sh@850 -- # return 0 00:19:03.406 00:52:54 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.epv6zyfCRr 00:19:03.406 [2024-04-27 00:52:54.815275] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:03.406 [2024-04-27 00:52:54.815392] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:03.406 TLSTESTn1 00:19:03.406 00:52:54 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:03.406 Running I/O for 10 seconds... 00:19:13.402 00:19:13.402 Latency(us) 00:19:13.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.402 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:13.402 Verification LBA range: start 0x0 length 0x2000 00:19:13.402 TLSTESTn1 : 10.01 5346.71 20.89 0.00 0.00 23906.42 6174.18 66777.73 00:19:13.402 =================================================================================================================== 00:19:13.402 Total : 5346.71 20.89 0.00 0.00 23906.42 6174.18 66777.73 00:19:13.402 0 00:19:13.402 00:53:04 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:13.402 00:53:04 -- target/tls.sh@45 -- # killprocess 2785909 00:19:13.402 00:53:04 -- common/autotest_common.sh@936 -- # '[' -z 2785909 ']' 00:19:13.402 00:53:04 -- common/autotest_common.sh@940 -- # kill -0 2785909 00:19:13.402 00:53:04 -- common/autotest_common.sh@941 -- # uname 00:19:13.402 00:53:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:13.402 00:53:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2785909 00:19:13.402 00:53:05 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:13.402 00:53:05 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:13.402 00:53:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2785909' 00:19:13.402 killing process with pid 2785909 00:19:13.402 00:53:05 -- common/autotest_common.sh@955 -- # kill 2785909 00:19:13.402 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.402 00:19:13.402 Latency(us) 00:19:13.402 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.402 =================================================================================================================== 00:19:13.402 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:13.402 [2024-04-27 00:53:05.043306] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: 
deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:13.402 00:53:05 -- common/autotest_common.sh@960 -- # wait 2785909 00:19:13.402 00:53:05 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BEoxeTKi2E 00:19:13.402 00:53:05 -- common/autotest_common.sh@638 -- # local es=0 00:19:13.402 00:53:05 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BEoxeTKi2E 00:19:13.402 00:53:05 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:13.402 00:53:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.402 00:53:05 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:13.402 00:53:05 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:13.402 00:53:05 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.BEoxeTKi2E 00:19:13.402 00:53:05 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.402 00:53:05 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.402 00:53:05 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.402 00:53:05 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.BEoxeTKi2E' 00:19:13.402 00:53:05 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.402 00:53:05 -- target/tls.sh@28 -- # bdevperf_pid=2788148 00:19:13.402 00:53:05 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.402 00:53:05 -- target/tls.sh@31 -- # waitforlisten 2788148 /var/tmp/bdevperf.sock 00:19:13.402 00:53:05 -- common/autotest_common.sh@817 -- # '[' -z 2788148 ']' 00:19:13.402 00:53:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.402 00:53:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:13.402 00:53:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.402 00:53:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:13.402 00:53:05 -- common/autotest_common.sh@10 -- # set +x 00:19:13.402 00:53:05 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.402 [2024-04-27 00:53:05.512898] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
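On the initiator side, the passing run above follows a three-step pattern: start bdevperf idle (-z) on a private RPC socket, attach a TLS-protected controller by handing the PSK file to bdev_nvme_attach_controller (this is where the handshake happens), then drive the workload with bdevperf.py. Condensed from the trace, with the same workspace paths:

# bdevperf waits idle on its own RPC socket
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
# attach over TCP with the server's PSK; success creates bdev TLSTESTn1
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
    -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.epv6zyfCRr
# run the configured verify workload against the attached namespace
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests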
00:19:13.402 [2024-04-27 00:53:05.513032] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788148 ] 00:19:13.402 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.402 [2024-04-27 00:53:05.632736] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.402 [2024-04-27 00:53:05.722771] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.661 00:53:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:13.661 00:53:06 -- common/autotest_common.sh@850 -- # return 0 00:19:13.661 00:53:06 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.BEoxeTKi2E 00:19:13.661 [2024-04-27 00:53:06.322251] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.661 [2024-04-27 00:53:06.322372] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:13.661 [2024-04-27 00:53:06.335730] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:13.661 [2024-04-27 00:53:06.335947] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:19:13.661 [2024-04-27 00:53:06.336925] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:19:13.661 [2024-04-27 00:53:06.337919] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:13.661 [2024-04-27 00:53:06.337935] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.661 [2024-04-27 00:53:06.337947] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:13.661 request: 00:19:13.661 { 00:19:13.661 "name": "TLSTEST", 00:19:13.661 "trtype": "tcp", 00:19:13.661 "traddr": "10.0.0.2", 00:19:13.661 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.661 "adrfam": "ipv4", 00:19:13.661 "trsvcid": "4420", 00:19:13.661 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:13.661 "psk": "/tmp/tmp.BEoxeTKi2E", 00:19:13.661 "method": "bdev_nvme_attach_controller", 00:19:13.661 "req_id": 1 00:19:13.661 } 00:19:13.661 Got JSON-RPC error response 00:19:13.661 response: 00:19:13.661 { 00:19:13.661 "code": -32602, 00:19:13.661 "message": "Invalid parameters" 00:19:13.661 } 00:19:13.661 00:53:06 -- target/tls.sh@36 -- # killprocess 2788148 00:19:13.661 00:53:06 -- common/autotest_common.sh@936 -- # '[' -z 2788148 ']' 00:19:13.661 00:53:06 -- common/autotest_common.sh@940 -- # kill -0 2788148 00:19:13.661 00:53:06 -- common/autotest_common.sh@941 -- # uname 00:19:13.661 00:53:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:13.919 00:53:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2788148 00:19:13.919 00:53:06 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:13.919 00:53:06 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:13.919 00:53:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2788148' 00:19:13.919 killing process with pid 2788148 00:19:13.919 00:53:06 -- common/autotest_common.sh@955 -- # kill 2788148 00:19:13.919 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.919 00:19:13.919 Latency(us) 00:19:13.919 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.919 =================================================================================================================== 00:19:13.919 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.919 [2024-04-27 00:53:06.391684] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:13.919 00:53:06 -- common/autotest_common.sh@960 -- # wait 2788148 00:19:14.179 00:53:06 -- target/tls.sh@37 -- # return 1 00:19:14.179 00:53:06 -- common/autotest_common.sh@641 -- # es=1 00:19:14.179 00:53:06 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:14.179 00:53:06 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:14.179 00:53:06 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:14.179 00:53:06 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.epv6zyfCRr 00:19:14.179 00:53:06 -- common/autotest_common.sh@638 -- # local es=0 00:19:14.179 00:53:06 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.epv6zyfCRr 00:19:14.179 00:53:06 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:14.179 00:53:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:14.179 00:53:06 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:14.179 00:53:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:14.179 00:53:06 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.epv6zyfCRr 00:19:14.179 00:53:06 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:14.179 00:53:06 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:14.179 00:53:06 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 
00:19:14.179 00:53:06 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.epv6zyfCRr' 00:19:14.179 00:53:06 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:14.179 00:53:06 -- target/tls.sh@28 -- # bdevperf_pid=2788333 00:19:14.179 00:53:06 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:14.179 00:53:06 -- target/tls.sh@31 -- # waitforlisten 2788333 /var/tmp/bdevperf.sock 00:19:14.179 00:53:06 -- common/autotest_common.sh@817 -- # '[' -z 2788333 ']' 00:19:14.179 00:53:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:14.179 00:53:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:14.179 00:53:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:14.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:14.179 00:53:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:14.179 00:53:06 -- common/autotest_common.sh@10 -- # set +x 00:19:14.179 00:53:06 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:14.179 [2024-04-27 00:53:06.828246] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:14.179 [2024-04-27 00:53:06.828391] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788333 ] 00:19:14.439 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.439 [2024-04-27 00:53:06.961213] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.439 [2024-04-27 00:53:07.057853] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:15.010 00:53:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:15.010 00:53:07 -- common/autotest_common.sh@850 -- # return 0 00:19:15.010 00:53:07 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.epv6zyfCRr 00:19:15.010 [2024-04-27 00:53:07.683556] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:15.010 [2024-04-27 00:53:07.683695] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:15.010 [2024-04-27 00:53:07.695723] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.010 [2024-04-27 00:53:07.695751] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:15.010 [2024-04-27 00:53:07.695789] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:15.010 [2024-04-27 00:53:07.696389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:19:15.010 [2024-04-27 00:53:07.697363] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:19:15.010 [2024-04-27 00:53:07.698364] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:15.010 [2024-04-27 00:53:07.698380] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:15.010 [2024-04-27 00:53:07.698394] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:19:15.010 request: 00:19:15.010 { 00:19:15.010 "name": "TLSTEST", 00:19:15.010 "trtype": "tcp", 00:19:15.010 "traddr": "10.0.0.2", 00:19:15.010 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:15.010 "adrfam": "ipv4", 00:19:15.010 "trsvcid": "4420", 00:19:15.010 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:15.010 "psk": "/tmp/tmp.epv6zyfCRr", 00:19:15.010 "method": "bdev_nvme_attach_controller", 00:19:15.010 "req_id": 1 00:19:15.010 } 00:19:15.010 Got JSON-RPC error response 00:19:15.010 response: 00:19:15.010 { 00:19:15.010 "code": -32602, 00:19:15.010 "message": "Invalid parameters" 00:19:15.010 } 00:19:15.269 00:53:07 -- target/tls.sh@36 -- # killprocess 2788333 00:19:15.269 00:53:07 -- common/autotest_common.sh@936 -- # '[' -z 2788333 ']' 00:19:15.269 00:53:07 -- common/autotest_common.sh@940 -- # kill -0 2788333 00:19:15.269 00:53:07 -- common/autotest_common.sh@941 -- # uname 00:19:15.269 00:53:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:15.269 00:53:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2788333 00:19:15.269 00:53:07 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:15.269 00:53:07 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:15.269 00:53:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2788333' 00:19:15.269 killing process with pid 2788333 00:19:15.269 00:53:07 -- common/autotest_common.sh@955 -- # kill 2788333 00:19:15.269 Received shutdown signal, test time was about 10.000000 seconds 00:19:15.269 00:19:15.269 Latency(us) 00:19:15.269 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.269 =================================================================================================================== 00:19:15.269 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:15.269 [2024-04-27 00:53:07.759589] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:15.269 00:53:07 -- common/autotest_common.sh@960 -- # wait 2788333 00:19:15.527 00:53:08 -- target/tls.sh@37 -- # return 1 00:19:15.527 00:53:08 -- common/autotest_common.sh@641 -- # es=1 00:19:15.527 00:53:08 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:15.527 00:53:08 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:15.527 00:53:08 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:15.527 00:53:08 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.epv6zyfCRr 00:19:15.527 00:53:08 -- common/autotest_common.sh@638 -- # local es=0 00:19:15.527 00:53:08 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.epv6zyfCRr 00:19:15.527 00:53:08 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:15.527 00:53:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.527 00:53:08 -- 
common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:15.527 00:53:08 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:15.527 00:53:08 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.epv6zyfCRr 00:19:15.527 00:53:08 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:15.527 00:53:08 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:15.527 00:53:08 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:15.527 00:53:08 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.epv6zyfCRr' 00:19:15.527 00:53:08 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:15.527 00:53:08 -- target/tls.sh@28 -- # bdevperf_pid=2788620 00:19:15.527 00:53:08 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:15.527 00:53:08 -- target/tls.sh@31 -- # waitforlisten 2788620 /var/tmp/bdevperf.sock 00:19:15.527 00:53:08 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:15.527 00:53:08 -- common/autotest_common.sh@817 -- # '[' -z 2788620 ']' 00:19:15.527 00:53:08 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:15.527 00:53:08 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:15.527 00:53:08 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:15.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:15.527 00:53:08 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:15.527 00:53:08 -- common/autotest_common.sh@10 -- # set +x 00:19:15.527 [2024-04-27 00:53:08.180517] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
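The wrong-hostnqn case above fails inside the target with "Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1": the TLS PSK identity is derived from the host and subsystem NQNs, and host2 was never registered with nvmf_subsystem_add_host, so no key can be matched; the cnode2 case being set up here fails the same lookup against a subsystem that does not exist. A quick way to see which identities a target can resolve, assuming jq is available:

# hosts registered per subsystem; only these hostnqn/subnqn pairs
# can be mapped to a PSK during the TLS handshake
./scripts/rpc.py nvmf_get_subsystems | jq '.[] | {nqn, hosts}'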
00:19:15.528 [2024-04-27 00:53:08.180629] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788620 ] 00:19:15.786 EAL: No free 2048 kB hugepages reported on node 1 00:19:15.786 [2024-04-27 00:53:08.276322] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:15.786 [2024-04-27 00:53:08.372549] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:16.353 00:53:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:16.353 00:53:08 -- common/autotest_common.sh@850 -- # return 0 00:19:16.353 00:53:08 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.epv6zyfCRr 00:19:16.353 [2024-04-27 00:53:09.010078] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:16.353 [2024-04-27 00:53:09.010203] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:16.353 [2024-04-27 00:53:09.017624] tcp.c: 878:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:16.353 [2024-04-27 00:53:09.017658] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:16.353 [2024-04-27 00:53:09.017696] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:16.353 [2024-04-27 00:53:09.018039] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (107): Transport endpoint is not connected 00:19:16.353 [2024-04-27 00:53:09.019017] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:19:16.353 [2024-04-27 00:53:09.020009] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:16.353 [2024-04-27 00:53:09.020025] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:16.353 [2024-04-27 00:53:09.020037] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:16.353 request: 00:19:16.353 { 00:19:16.353 "name": "TLSTEST", 00:19:16.353 "trtype": "tcp", 00:19:16.353 "traddr": "10.0.0.2", 00:19:16.353 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:16.353 "adrfam": "ipv4", 00:19:16.353 "trsvcid": "4420", 00:19:16.353 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:16.353 "psk": "/tmp/tmp.epv6zyfCRr", 00:19:16.353 "method": "bdev_nvme_attach_controller", 00:19:16.353 "req_id": 1 00:19:16.353 } 00:19:16.353 Got JSON-RPC error response 00:19:16.353 response: 00:19:16.353 { 00:19:16.353 "code": -32602, 00:19:16.353 "message": "Invalid parameters" 00:19:16.353 } 00:19:16.353 00:53:09 -- target/tls.sh@36 -- # killprocess 2788620 00:19:16.353 00:53:09 -- common/autotest_common.sh@936 -- # '[' -z 2788620 ']' 00:19:16.353 00:53:09 -- common/autotest_common.sh@940 -- # kill -0 2788620 00:19:16.353 00:53:09 -- common/autotest_common.sh@941 -- # uname 00:19:16.353 00:53:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.353 00:53:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2788620 00:19:16.613 00:53:09 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:16.613 00:53:09 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:16.613 00:53:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2788620' 00:19:16.613 killing process with pid 2788620 00:19:16.613 00:53:09 -- common/autotest_common.sh@955 -- # kill 2788620 00:19:16.613 Received shutdown signal, test time was about 10.000000 seconds 00:19:16.613 00:19:16.613 Latency(us) 00:19:16.613 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:16.613 =================================================================================================================== 00:19:16.613 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:16.613 [2024-04-27 00:53:09.079785] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:16.613 00:53:09 -- common/autotest_common.sh@960 -- # wait 2788620 00:19:16.874 00:53:09 -- target/tls.sh@37 -- # return 1 00:19:16.874 00:53:09 -- common/autotest_common.sh@641 -- # es=1 00:19:16.874 00:53:09 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:16.874 00:53:09 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:16.874 00:53:09 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:16.874 00:53:09 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.874 00:53:09 -- common/autotest_common.sh@638 -- # local es=0 00:19:16.874 00:53:09 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.874 00:53:09 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:16.874 00:53:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.874 00:53:09 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:16.874 00:53:09 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:16.874 00:53:09 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:16.874 00:53:09 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.874 00:53:09 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.874 00:53:09 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.874 00:53:09 -- target/tls.sh@23 -- # psk= 
00:19:16.874 00:53:09 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.874 00:53:09 -- target/tls.sh@28 -- # bdevperf_pid=2788927 00:19:16.874 00:53:09 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.874 00:53:09 -- target/tls.sh@31 -- # waitforlisten 2788927 /var/tmp/bdevperf.sock 00:19:16.874 00:53:09 -- common/autotest_common.sh@817 -- # '[' -z 2788927 ']' 00:19:16.874 00:53:09 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.874 00:53:09 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:16.874 00:53:09 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.874 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.874 00:53:09 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:16.874 00:53:09 -- common/autotest_common.sh@10 -- # set +x 00:19:16.874 00:53:09 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.874 [2024-04-27 00:53:09.530796] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:16.874 [2024-04-27 00:53:09.530943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2788927 ] 00:19:17.134 EAL: No free 2048 kB hugepages reported on node 1 00:19:17.135 [2024-04-27 00:53:09.659137] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.135 [2024-04-27 00:53:09.749116] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.704 00:53:10 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:17.704 00:53:10 -- common/autotest_common.sh@850 -- # return 0 00:19:17.704 00:53:10 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:17.965 [2024-04-27 00:53:10.406144] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:17.965 [2024-04-27 00:53:10.408104] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004c40 (9): Bad file descriptor 00:19:17.965 [2024-04-27 00:53:10.409097] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:17.965 [2024-04-27 00:53:10.409115] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:17.965 [2024-04-27 00:53:10.409130] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:19:17.965 request: 00:19:17.965 { 00:19:17.965 "name": "TLSTEST", 00:19:17.965 "trtype": "tcp", 00:19:17.965 "traddr": "10.0.0.2", 00:19:17.965 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:17.965 "adrfam": "ipv4", 00:19:17.965 "trsvcid": "4420", 00:19:17.965 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:17.965 "method": "bdev_nvme_attach_controller", 00:19:17.965 "req_id": 1 00:19:17.965 } 00:19:17.965 Got JSON-RPC error response 00:19:17.965 response: 00:19:17.965 { 00:19:17.965 "code": -32602, 00:19:17.965 "message": "Invalid parameters" 00:19:17.965 } 00:19:17.965 00:53:10 -- target/tls.sh@36 -- # killprocess 2788927 00:19:17.965 00:53:10 -- common/autotest_common.sh@936 -- # '[' -z 2788927 ']' 00:19:17.965 00:53:10 -- common/autotest_common.sh@940 -- # kill -0 2788927 00:19:17.965 00:53:10 -- common/autotest_common.sh@941 -- # uname 00:19:17.965 00:53:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:17.965 00:53:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2788927 00:19:17.965 00:53:10 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:17.965 00:53:10 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:17.965 00:53:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2788927' 00:19:17.965 killing process with pid 2788927 00:19:17.965 00:53:10 -- common/autotest_common.sh@955 -- # kill 2788927 00:19:17.965 Received shutdown signal, test time was about 10.000000 seconds 00:19:17.965 00:19:17.965 Latency(us) 00:19:17.965 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.965 =================================================================================================================== 00:19:17.965 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:17.965 00:53:10 -- common/autotest_common.sh@960 -- # wait 2788927 00:19:18.226 00:53:10 -- target/tls.sh@37 -- # return 1 00:19:18.226 00:53:10 -- common/autotest_common.sh@641 -- # es=1 00:19:18.226 00:53:10 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:18.226 00:53:10 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:18.226 00:53:10 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:18.226 00:53:10 -- target/tls.sh@158 -- # killprocess 2783171 00:19:18.226 00:53:10 -- common/autotest_common.sh@936 -- # '[' -z 2783171 ']' 00:19:18.226 00:53:10 -- common/autotest_common.sh@940 -- # kill -0 2783171 00:19:18.226 00:53:10 -- common/autotest_common.sh@941 -- # uname 00:19:18.226 00:53:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:18.226 00:53:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2783171 00:19:18.226 00:53:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:18.226 00:53:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:18.226 00:53:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2783171' 00:19:18.226 killing process with pid 2783171 00:19:18.226 00:53:10 -- common/autotest_common.sh@955 -- # kill 2783171 00:19:18.226 [2024-04-27 00:53:10.884054] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:18.226 00:53:10 -- common/autotest_common.sh@960 -- # wait 2783171 00:19:18.795 00:53:11 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:18.795 00:53:11 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
00112233445566778899aabbccddeeff0011223344556677 2 00:19:18.795 00:53:11 -- nvmf/common.sh@691 -- # local prefix key digest 00:19:18.795 00:53:11 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:19:18.795 00:53:11 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:18.795 00:53:11 -- nvmf/common.sh@693 -- # digest=2 00:19:18.795 00:53:11 -- nvmf/common.sh@694 -- # python - 00:19:18.795 00:53:11 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:19.054 00:53:11 -- target/tls.sh@160 -- # mktemp 00:19:19.054 00:53:11 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.aACpVS6BpP 00:19:19.054 00:53:11 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:19.054 00:53:11 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.aACpVS6BpP 00:19:19.054 00:53:11 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:19.054 00:53:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:19.054 00:53:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:19.054 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.054 00:53:11 -- nvmf/common.sh@470 -- # nvmfpid=2789318 00:19:19.054 00:53:11 -- nvmf/common.sh@471 -- # waitforlisten 2789318 00:19:19.054 00:53:11 -- common/autotest_common.sh@817 -- # '[' -z 2789318 ']' 00:19:19.054 00:53:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.054 00:53:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:19.054 00:53:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:19.054 00:53:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:19.054 00:53:11 -- common/autotest_common.sh@10 -- # set +x 00:19:19.054 00:53:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:19.054 [2024-04-27 00:53:11.599567] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:19.054 [2024-04-27 00:53:11.599704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:19.054 EAL: No free 2048 kB hugepages reported on node 1 00:19:19.054 [2024-04-27 00:53:11.740421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:19.315 [2024-04-27 00:53:11.832926] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:19.315 [2024-04-27 00:53:11.832981] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:19.315 [2024-04-27 00:53:11.832992] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:19.315 [2024-04-27 00:53:11.833003] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:19.315 [2024-04-27 00:53:11.833011] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
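The retest above switches to a 48-byte key with digest 2, producing a SHA-384 interchange key (the :02: label). A round-trip check of such a key, under the same CRC-32 trailer assumption as the encoding sketch earlier, so treat it as illustrative rather than authoritative:

key_long='NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==:'
python3 - "$key_long" <<'PY'
import base64, struct, sys, zlib
label, payload = sys.argv[1].rstrip(":").rsplit(":", 1)
raw = base64.b64decode(payload)
body, crc = raw[:-4], struct.unpack("<I", raw[-4:])[0]
# the trailer is assumed to be the little-endian CRC-32 of the key bytes
print(label, "->", body.decode(), "CRC", "ok" if crc == zlib.crc32(body) else "MISMATCH")
PY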
00:19:19.315 [2024-04-27 00:53:11.833052] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.882 00:53:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:19.882 00:53:12 -- common/autotest_common.sh@850 -- # return 0 00:19:19.882 00:53:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:19.882 00:53:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:19.882 00:53:12 -- common/autotest_common.sh@10 -- # set +x 00:19:19.882 00:53:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.882 00:53:12 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.aACpVS6BpP 00:19:19.882 00:53:12 -- target/tls.sh@49 -- # local key=/tmp/tmp.aACpVS6BpP 00:19:19.882 00:53:12 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:19.882 [2024-04-27 00:53:12.527072] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:19.882 00:53:12 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:20.141 00:53:12 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:20.141 [2024-04-27 00:53:12.811116] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:20.141 [2024-04-27 00:53:12.811405] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:20.141 00:53:12 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:20.399 malloc0 00:19:20.399 00:53:12 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:20.658 00:53:13 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aACpVS6BpP 00:19:20.658 [2024-04-27 00:53:13.261906] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:20.658 00:53:13 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aACpVS6BpP 00:19:20.658 00:53:13 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:20.658 00:53:13 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:20.658 00:53:13 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:20.658 00:53:13 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aACpVS6BpP' 00:19:20.658 00:53:13 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:20.658 00:53:13 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:20.658 00:53:13 -- target/tls.sh@28 -- # bdevperf_pid=2789802 00:19:20.658 00:53:13 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:20.658 00:53:13 -- target/tls.sh@31 -- # waitforlisten 2789802 /var/tmp/bdevperf.sock 00:19:20.658 00:53:13 -- common/autotest_common.sh@817 -- # '[' -z 2789802 ']' 00:19:20.658 00:53:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:20.658 00:53:13 -- common/autotest_common.sh@822 -- # local 
max_retries=100 00:19:20.658 00:53:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:20.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:20.658 00:53:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:20.658 00:53:13 -- common/autotest_common.sh@10 -- # set +x 00:19:20.658 [2024-04-27 00:53:13.328633] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:20.658 [2024-04-27 00:53:13.328717] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2789802 ] 00:19:20.916 EAL: No free 2048 kB hugepages reported on node 1 00:19:20.916 [2024-04-27 00:53:13.418516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:20.916 [2024-04-27 00:53:13.513360] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:21.483 00:53:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:21.483 00:53:14 -- common/autotest_common.sh@850 -- # return 0 00:19:21.483 00:53:14 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aACpVS6BpP 00:19:21.483 [2024-04-27 00:53:14.167658] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:21.483 [2024-04-27 00:53:14.167774] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:21.742 TLSTESTn1 00:19:21.742 00:53:14 -- target/tls.sh@41 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:21.742 Running I/O for 10 seconds... 
00:19:31.724 00:19:31.724 Latency(us) 00:19:31.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.724 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:31.724 Verification LBA range: start 0x0 length 0x2000 00:19:31.724 TLSTESTn1 : 10.02 5478.14 21.40 0.00 0.00 23332.58 4932.45 44702.45 00:19:31.724 =================================================================================================================== 00:19:31.724 Total : 5478.14 21.40 0.00 0.00 23332.58 4932.45 44702.45 00:19:31.724 0 00:19:31.724 00:53:24 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:31.724 00:53:24 -- target/tls.sh@45 -- # killprocess 2789802 00:19:31.724 00:53:24 -- common/autotest_common.sh@936 -- # '[' -z 2789802 ']' 00:19:31.724 00:53:24 -- common/autotest_common.sh@940 -- # kill -0 2789802 00:19:31.724 00:53:24 -- common/autotest_common.sh@941 -- # uname 00:19:31.724 00:53:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:31.724 00:53:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2789802 00:19:31.724 00:53:24 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:31.724 00:53:24 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:31.724 00:53:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2789802' 00:19:31.724 killing process with pid 2789802 00:19:31.724 00:53:24 -- common/autotest_common.sh@955 -- # kill 2789802 00:19:31.724 Received shutdown signal, test time was about 10.000000 seconds 00:19:31.724 00:19:31.724 Latency(us) 00:19:31.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.724 =================================================================================================================== 00:19:31.724 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.724 [2024-04-27 00:53:24.397051] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:31.724 00:53:24 -- common/autotest_common.sh@960 -- # wait 2789802 00:19:32.290 00:53:24 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.aACpVS6BpP 00:19:32.290 00:53:24 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aACpVS6BpP 00:19:32.290 00:53:24 -- common/autotest_common.sh@638 -- # local es=0 00:19:32.290 00:53:24 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aACpVS6BpP 00:19:32.290 00:53:24 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:19:32.290 00:53:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:32.290 00:53:24 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:19:32.290 00:53:24 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:32.290 00:53:24 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.aACpVS6BpP 00:19:32.290 00:53:24 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:32.290 00:53:24 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:32.290 00:53:24 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:32.290 00:53:24 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.aACpVS6BpP' 00:19:32.290 00:53:24 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:32.290 00:53:24 -- target/tls.sh@28 -- # 
bdevperf_pid=2791960 00:19:32.290 00:53:24 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:32.290 00:53:24 -- target/tls.sh@31 -- # waitforlisten 2791960 /var/tmp/bdevperf.sock 00:19:32.290 00:53:24 -- common/autotest_common.sh@817 -- # '[' -z 2791960 ']' 00:19:32.290 00:53:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.290 00:53:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:32.290 00:53:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.290 00:53:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:32.290 00:53:24 -- common/autotest_common.sh@10 -- # set +x 00:19:32.290 00:53:24 -- target/tls.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:32.290 [2024-04-27 00:53:24.857905] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:32.290 [2024-04-27 00:53:24.858019] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2791960 ] 00:19:32.290 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.290 [2024-04-27 00:53:24.969323] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.549 [2024-04-27 00:53:25.063719] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:33.120 00:53:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:33.120 00:53:25 -- common/autotest_common.sh@850 -- # return 0 00:19:33.120 00:53:25 -- target/tls.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aACpVS6BpP 00:19:33.120 [2024-04-27 00:53:25.712354] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.120 [2024-04-27 00:53:25.712418] bdev_nvme.c:6071:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:33.120 [2024-04-27 00:53:25.712431] bdev_nvme.c:6180:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.aACpVS6BpP 00:19:33.120 request: 00:19:33.120 { 00:19:33.120 "name": "TLSTEST", 00:19:33.120 "trtype": "tcp", 00:19:33.120 "traddr": "10.0.0.2", 00:19:33.120 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:33.120 "adrfam": "ipv4", 00:19:33.120 "trsvcid": "4420", 00:19:33.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:33.120 "psk": "/tmp/tmp.aACpVS6BpP", 00:19:33.120 "method": "bdev_nvme_attach_controller", 00:19:33.120 "req_id": 1 00:19:33.120 } 00:19:33.120 Got JSON-RPC error response 00:19:33.120 response: 00:19:33.120 { 00:19:33.120 "code": -1, 00:19:33.120 "message": "Operation not permitted" 00:19:33.120 } 00:19:33.120 00:53:25 -- target/tls.sh@36 -- # killprocess 2791960 00:19:33.120 00:53:25 -- common/autotest_common.sh@936 -- # '[' -z 2791960 ']' 00:19:33.120 00:53:25 -- common/autotest_common.sh@940 -- # kill -0 2791960 00:19:33.120 00:53:25 -- common/autotest_common.sh@941 -- # uname 00:19:33.120 00:53:25 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:33.120 00:53:25 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2791960 00:19:33.120 00:53:25 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:33.120 00:53:25 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:33.120 00:53:25 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2791960' 00:19:33.120 killing process with pid 2791960 00:19:33.120 00:53:25 -- common/autotest_common.sh@955 -- # kill 2791960 00:19:33.120 Received shutdown signal, test time was about 10.000000 seconds 00:19:33.120 00:19:33.120 Latency(us) 00:19:33.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:33.120 =================================================================================================================== 00:19:33.120 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:33.120 00:53:25 -- common/autotest_common.sh@960 -- # wait 2791960 00:19:33.689 00:53:26 -- target/tls.sh@37 -- # return 1 00:19:33.689 00:53:26 -- common/autotest_common.sh@641 -- # es=1 00:19:33.689 00:53:26 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:33.689 00:53:26 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:33.689 00:53:26 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:33.690 00:53:26 -- target/tls.sh@174 -- # killprocess 2789318 00:19:33.690 00:53:26 -- common/autotest_common.sh@936 -- # '[' -z 2789318 ']' 00:19:33.690 00:53:26 -- common/autotest_common.sh@940 -- # kill -0 2789318 00:19:33.690 00:53:26 -- common/autotest_common.sh@941 -- # uname 00:19:33.690 00:53:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:33.690 00:53:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2789318 00:19:33.690 00:53:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:33.690 00:53:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:33.690 00:53:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2789318' 00:19:33.690 killing process with pid 2789318 00:19:33.690 00:53:26 -- common/autotest_common.sh@955 -- # kill 2789318 00:19:33.690 [2024-04-27 00:53:26.181202] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:33.690 00:53:26 -- common/autotest_common.sh@960 -- # wait 2789318 00:19:34.318 00:53:26 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:34.318 00:53:26 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:34.318 00:53:26 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:34.318 00:53:26 -- common/autotest_common.sh@10 -- # set +x 00:19:34.318 00:53:26 -- nvmf/common.sh@470 -- # nvmfpid=2792278 00:19:34.318 00:53:26 -- nvmf/common.sh@471 -- # waitforlisten 2792278 00:19:34.318 00:53:26 -- common/autotest_common.sh@817 -- # '[' -z 2792278 ']' 00:19:34.318 00:53:26 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:34.318 00:53:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:34.318 00:53:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:34.318 00:53:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:34.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
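The two permission cases (the initiator-side failure above, and the target-side nvmf_subsystem_add_host failure that follows) show that a PSK file readable by group or other is rejected outright with "Incorrect permissions for PSK file"; the test restores 0600 afterwards. A pre-flight check one might add before wiring a key in, using the key path from this log:

key=/tmp/tmp.aACpVS6BpP
# refuse world/group-readable key files up front, mirroring the
# "Incorrect permissions for PSK file" errors above
if [[ "$(stat -c %a "$key")" != 600 ]]; then
    echo "refusing $key: mode must be 0600" >&2
    exit 1
fi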
00:19:34.318 00:53:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:34.318 00:53:26 -- common/autotest_common.sh@10 -- # set +x 00:19:34.318 [2024-04-27 00:53:26.770974] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:34.318 [2024-04-27 00:53:26.771089] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.318 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.318 [2024-04-27 00:53:26.881185] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.318 [2024-04-27 00:53:26.977964] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:34.318 [2024-04-27 00:53:26.978000] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:34.318 [2024-04-27 00:53:26.978010] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:34.318 [2024-04-27 00:53:26.978019] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:34.318 [2024-04-27 00:53:26.978026] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:34.318 [2024-04-27 00:53:26.978059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:34.888 00:53:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:34.888 00:53:27 -- common/autotest_common.sh@850 -- # return 0 00:19:34.888 00:53:27 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:34.888 00:53:27 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:34.888 00:53:27 -- common/autotest_common.sh@10 -- # set +x 00:19:34.888 00:53:27 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:34.888 00:53:27 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.aACpVS6BpP 00:19:34.888 00:53:27 -- common/autotest_common.sh@638 -- # local es=0 00:19:34.888 00:53:27 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.aACpVS6BpP 00:19:34.888 00:53:27 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:19:34.888 00:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:34.888 00:53:27 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:19:34.888 00:53:27 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:34.888 00:53:27 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.aACpVS6BpP 00:19:34.888 00:53:27 -- target/tls.sh@49 -- # local key=/tmp/tmp.aACpVS6BpP 00:19:34.888 00:53:27 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:35.148 [2024-04-27 00:53:27.624597] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:35.148 00:53:27 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:35.148 00:53:27 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:35.407 [2024-04-27 00:53:27.924653] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:35.407 [2024-04-27 00:53:27.924901] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: 
*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:35.407 00:53:27 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:35.407 malloc0 00:19:35.665 00:53:28 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:35.665 00:53:28 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aACpVS6BpP 00:19:35.922 [2024-04-27 00:53:28.369684] tcp.c:3562:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:35.922 [2024-04-27 00:53:28.369713] tcp.c:3648:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:35.922 [2024-04-27 00:53:28.369736] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:19:35.922 request: 00:19:35.922 { 00:19:35.922 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.922 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.922 "psk": "/tmp/tmp.aACpVS6BpP", 00:19:35.922 "method": "nvmf_subsystem_add_host", 00:19:35.922 "req_id": 1 00:19:35.922 } 00:19:35.922 Got JSON-RPC error response 00:19:35.922 response: 00:19:35.922 { 00:19:35.922 "code": -32603, 00:19:35.922 "message": "Internal error" 00:19:35.922 } 00:19:35.922 00:53:28 -- common/autotest_common.sh@641 -- # es=1 00:19:35.922 00:53:28 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:35.922 00:53:28 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:35.922 00:53:28 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:35.922 00:53:28 -- target/tls.sh@180 -- # killprocess 2792278 00:19:35.922 00:53:28 -- common/autotest_common.sh@936 -- # '[' -z 2792278 ']' 00:19:35.922 00:53:28 -- common/autotest_common.sh@940 -- # kill -0 2792278 00:19:35.922 00:53:28 -- common/autotest_common.sh@941 -- # uname 00:19:35.922 00:53:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.922 00:53:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2792278 00:19:35.922 00:53:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:35.922 00:53:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:35.922 00:53:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2792278' 00:19:35.922 killing process with pid 2792278 00:19:35.922 00:53:28 -- common/autotest_common.sh@955 -- # kill 2792278 00:19:35.922 00:53:28 -- common/autotest_common.sh@960 -- # wait 2792278 00:19:36.491 00:53:28 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.aACpVS6BpP 00:19:36.491 00:53:28 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:36.491 00:53:28 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:36.491 00:53:28 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:36.491 00:53:28 -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 00:53:28 -- nvmf/common.sh@470 -- # nvmfpid=2792896 00:19:36.491 00:53:28 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:36.491 00:53:28 -- nvmf/common.sh@471 -- # waitforlisten 2792896 00:19:36.491 00:53:28 -- common/autotest_common.sh@817 -- # '[' -z 2792896 ']' 00:19:36.491 00:53:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:36.491 00:53:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:36.491 
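The same permission check exists on the target side: nvmf_subsystem_add_host cannot retrieve the PSK and returns -32603 (Internal error) until the script tightens the key file with chmod 0600 (tls.sh@181 above) and restarts the target, after which setup_nvmf_tgt is repeated (tls.sh@185 below). That sequence, sketched with shortened paths (every command appears in the trace):

    chmod 0600 /tmp/tmp.aACpVS6BpP                     # PSK must not be group/other readable
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420 -k                  # -k marks the listener as TLS
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
        nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aACpVS6BpP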
00:53:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:36.491 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:36.491 00:53:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:36.491 00:53:28 -- common/autotest_common.sh@10 -- # set +x 00:19:36.491 [2024-04-27 00:53:29.006644] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:36.491 [2024-04-27 00:53:29.006724] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:36.491 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.491 [2024-04-27 00:53:29.099426] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.751 [2024-04-27 00:53:29.195645] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:36.751 [2024-04-27 00:53:29.195682] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.751 [2024-04-27 00:53:29.195692] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.751 [2024-04-27 00:53:29.195702] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.751 [2024-04-27 00:53:29.195710] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.751 [2024-04-27 00:53:29.195740] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:37.318 00:53:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:37.318 00:53:29 -- common/autotest_common.sh@850 -- # return 0 00:19:37.318 00:53:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:37.318 00:53:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:37.318 00:53:29 -- common/autotest_common.sh@10 -- # set +x 00:19:37.318 00:53:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.318 00:53:29 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.aACpVS6BpP 00:19:37.318 00:53:29 -- target/tls.sh@49 -- # local key=/tmp/tmp.aACpVS6BpP 00:19:37.318 00:53:29 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:37.318 [2024-04-27 00:53:29.903548] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:37.318 00:53:29 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:37.576 00:53:30 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:37.576 [2024-04-27 00:53:30.183615] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:37.576 [2024-04-27 00:53:30.183845] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:37.576 00:53:30 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:37.833 malloc0 00:19:37.833 00:53:30 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
malloc0 -n 1 00:19:37.834 00:53:30 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aACpVS6BpP 00:19:38.129 [2024-04-27 00:53:30.599705] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:38.130 00:53:30 -- target/tls.sh@188 -- # bdevperf_pid=2793223 00:19:38.130 00:53:30 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:38.130 00:53:30 -- target/tls.sh@187 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:38.130 00:53:30 -- target/tls.sh@191 -- # waitforlisten 2793223 /var/tmp/bdevperf.sock 00:19:38.130 00:53:30 -- common/autotest_common.sh@817 -- # '[' -z 2793223 ']' 00:19:38.130 00:53:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:38.130 00:53:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:38.130 00:53:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:38.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:38.130 00:53:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:38.130 00:53:30 -- common/autotest_common.sh@10 -- # set +x 00:19:38.130 [2024-04-27 00:53:30.661694] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:38.130 [2024-04-27 00:53:30.661770] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793223 ] 00:19:38.130 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.130 [2024-04-27 00:53:30.747293] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:38.390 [2024-04-27 00:53:30.839170] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.954 00:53:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:38.954 00:53:31 -- common/autotest_common.sh@850 -- # return 0 00:19:38.954 00:53:31 -- target/tls.sh@192 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aACpVS6BpP 00:19:38.954 [2024-04-27 00:53:31.524048] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:38.954 [2024-04-27 00:53:31.524169] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:38.954 TLSTESTn1 00:19:38.954 00:53:31 -- target/tls.sh@196 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py save_config 00:19:39.211 00:53:31 -- target/tls.sh@196 -- # tgtconf='{ 00:19:39.211 "subsystems": [ 00:19:39.211 { 00:19:39.211 "subsystem": "keyring", 00:19:39.211 "config": [] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "iobuf", 00:19:39.211 "config": [ 00:19:39.211 { 00:19:39.211 "method": "iobuf_set_options", 00:19:39.211 "params": { 00:19:39.211 "small_pool_count": 8192, 00:19:39.211 "large_pool_count": 1024, 00:19:39.211 "small_bufsize": 8192, 00:19:39.211 "large_bufsize": 
135168 00:19:39.211 } 00:19:39.211 } 00:19:39.211 ] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "sock", 00:19:39.211 "config": [ 00:19:39.211 { 00:19:39.211 "method": "sock_impl_set_options", 00:19:39.211 "params": { 00:19:39.211 "impl_name": "posix", 00:19:39.211 "recv_buf_size": 2097152, 00:19:39.211 "send_buf_size": 2097152, 00:19:39.211 "enable_recv_pipe": true, 00:19:39.211 "enable_quickack": false, 00:19:39.211 "enable_placement_id": 0, 00:19:39.211 "enable_zerocopy_send_server": true, 00:19:39.211 "enable_zerocopy_send_client": false, 00:19:39.211 "zerocopy_threshold": 0, 00:19:39.211 "tls_version": 0, 00:19:39.211 "enable_ktls": false 00:19:39.211 } 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "method": "sock_impl_set_options", 00:19:39.211 "params": { 00:19:39.211 "impl_name": "ssl", 00:19:39.211 "recv_buf_size": 4096, 00:19:39.211 "send_buf_size": 4096, 00:19:39.211 "enable_recv_pipe": true, 00:19:39.211 "enable_quickack": false, 00:19:39.211 "enable_placement_id": 0, 00:19:39.211 "enable_zerocopy_send_server": true, 00:19:39.211 "enable_zerocopy_send_client": false, 00:19:39.211 "zerocopy_threshold": 0, 00:19:39.211 "tls_version": 0, 00:19:39.211 "enable_ktls": false 00:19:39.211 } 00:19:39.211 } 00:19:39.211 ] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "vmd", 00:19:39.211 "config": [] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "accel", 00:19:39.211 "config": [ 00:19:39.211 { 00:19:39.211 "method": "accel_set_options", 00:19:39.211 "params": { 00:19:39.211 "small_cache_size": 128, 00:19:39.211 "large_cache_size": 16, 00:19:39.211 "task_count": 2048, 00:19:39.211 "sequence_count": 2048, 00:19:39.211 "buf_count": 2048 00:19:39.211 } 00:19:39.211 } 00:19:39.211 ] 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "subsystem": "bdev", 00:19:39.211 "config": [ 00:19:39.211 { 00:19:39.211 "method": "bdev_set_options", 00:19:39.211 "params": { 00:19:39.211 "bdev_io_pool_size": 65535, 00:19:39.211 "bdev_io_cache_size": 256, 00:19:39.211 "bdev_auto_examine": true, 00:19:39.211 "iobuf_small_cache_size": 128, 00:19:39.211 "iobuf_large_cache_size": 16 00:19:39.211 } 00:19:39.211 }, 00:19:39.211 { 00:19:39.211 "method": "bdev_raid_set_options", 00:19:39.212 "params": { 00:19:39.212 "process_window_size_kb": 1024 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "bdev_iscsi_set_options", 00:19:39.212 "params": { 00:19:39.212 "timeout_sec": 30 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "bdev_nvme_set_options", 00:19:39.212 "params": { 00:19:39.212 "action_on_timeout": "none", 00:19:39.212 "timeout_us": 0, 00:19:39.212 "timeout_admin_us": 0, 00:19:39.212 "keep_alive_timeout_ms": 10000, 00:19:39.212 "arbitration_burst": 0, 00:19:39.212 "low_priority_weight": 0, 00:19:39.212 "medium_priority_weight": 0, 00:19:39.212 "high_priority_weight": 0, 00:19:39.212 "nvme_adminq_poll_period_us": 10000, 00:19:39.212 "nvme_ioq_poll_period_us": 0, 00:19:39.212 "io_queue_requests": 0, 00:19:39.212 "delay_cmd_submit": true, 00:19:39.212 "transport_retry_count": 4, 00:19:39.212 "bdev_retry_count": 3, 00:19:39.212 "transport_ack_timeout": 0, 00:19:39.212 "ctrlr_loss_timeout_sec": 0, 00:19:39.212 "reconnect_delay_sec": 0, 00:19:39.212 "fast_io_fail_timeout_sec": 0, 00:19:39.212 "disable_auto_failback": false, 00:19:39.212 "generate_uuids": false, 00:19:39.212 "transport_tos": 0, 00:19:39.212 "nvme_error_stat": false, 00:19:39.212 "rdma_srq_size": 0, 00:19:39.212 "io_path_stat": false, 00:19:39.212 "allow_accel_sequence": false, 
00:19:39.212 "rdma_max_cq_size": 0, 00:19:39.212 "rdma_cm_event_timeout_ms": 0, 00:19:39.212 "dhchap_digests": [ 00:19:39.212 "sha256", 00:19:39.212 "sha384", 00:19:39.212 "sha512" 00:19:39.212 ], 00:19:39.212 "dhchap_dhgroups": [ 00:19:39.212 "null", 00:19:39.212 "ffdhe2048", 00:19:39.212 "ffdhe3072", 00:19:39.212 "ffdhe4096", 00:19:39.212 "ffdhe6144", 00:19:39.212 "ffdhe8192" 00:19:39.212 ] 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "bdev_nvme_set_hotplug", 00:19:39.212 "params": { 00:19:39.212 "period_us": 100000, 00:19:39.212 "enable": false 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "bdev_malloc_create", 00:19:39.212 "params": { 00:19:39.212 "name": "malloc0", 00:19:39.212 "num_blocks": 8192, 00:19:39.212 "block_size": 4096, 00:19:39.212 "physical_block_size": 4096, 00:19:39.212 "uuid": "72c97bb9-4313-4123-8687-48ead1f39b7f", 00:19:39.212 "optimal_io_boundary": 0 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "bdev_wait_for_examine" 00:19:39.212 } 00:19:39.212 ] 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "subsystem": "nbd", 00:19:39.212 "config": [] 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "subsystem": "scheduler", 00:19:39.212 "config": [ 00:19:39.212 { 00:19:39.212 "method": "framework_set_scheduler", 00:19:39.212 "params": { 00:19:39.212 "name": "static" 00:19:39.212 } 00:19:39.212 } 00:19:39.212 ] 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "subsystem": "nvmf", 00:19:39.212 "config": [ 00:19:39.212 { 00:19:39.212 "method": "nvmf_set_config", 00:19:39.212 "params": { 00:19:39.212 "discovery_filter": "match_any", 00:19:39.212 "admin_cmd_passthru": { 00:19:39.212 "identify_ctrlr": false 00:19:39.212 } 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "nvmf_set_max_subsystems", 00:19:39.212 "params": { 00:19:39.212 "max_subsystems": 1024 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "nvmf_set_crdt", 00:19:39.212 "params": { 00:19:39.212 "crdt1": 0, 00:19:39.212 "crdt2": 0, 00:19:39.212 "crdt3": 0 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "nvmf_create_transport", 00:19:39.212 "params": { 00:19:39.212 "trtype": "TCP", 00:19:39.212 "max_queue_depth": 128, 00:19:39.212 "max_io_qpairs_per_ctrlr": 127, 00:19:39.212 "in_capsule_data_size": 4096, 00:19:39.212 "max_io_size": 131072, 00:19:39.212 "io_unit_size": 131072, 00:19:39.212 "max_aq_depth": 128, 00:19:39.212 "num_shared_buffers": 511, 00:19:39.212 "buf_cache_size": 4294967295, 00:19:39.212 "dif_insert_or_strip": false, 00:19:39.212 "zcopy": false, 00:19:39.212 "c2h_success": false, 00:19:39.212 "sock_priority": 0, 00:19:39.212 "abort_timeout_sec": 1, 00:19:39.212 "ack_timeout": 0, 00:19:39.212 "data_wr_pool_size": 0 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "nvmf_create_subsystem", 00:19:39.212 "params": { 00:19:39.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.212 "allow_any_host": false, 00:19:39.212 "serial_number": "SPDK00000000000001", 00:19:39.212 "model_number": "SPDK bdev Controller", 00:19:39.212 "max_namespaces": 10, 00:19:39.212 "min_cntlid": 1, 00:19:39.212 "max_cntlid": 65519, 00:19:39.212 "ana_reporting": false 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "nvmf_subsystem_add_host", 00:19:39.212 "params": { 00:19:39.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.212 "host": "nqn.2016-06.io.spdk:host1", 00:19:39.212 "psk": "/tmp/tmp.aACpVS6BpP" 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": 
"nvmf_subsystem_add_ns", 00:19:39.212 "params": { 00:19:39.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.212 "namespace": { 00:19:39.212 "nsid": 1, 00:19:39.212 "bdev_name": "malloc0", 00:19:39.212 "nguid": "72C97BB943134123868748EAD1F39B7F", 00:19:39.212 "uuid": "72c97bb9-4313-4123-8687-48ead1f39b7f", 00:19:39.212 "no_auto_visible": false 00:19:39.212 } 00:19:39.212 } 00:19:39.212 }, 00:19:39.212 { 00:19:39.212 "method": "nvmf_subsystem_add_listener", 00:19:39.212 "params": { 00:19:39.212 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.212 "listen_address": { 00:19:39.212 "trtype": "TCP", 00:19:39.212 "adrfam": "IPv4", 00:19:39.212 "traddr": "10.0.0.2", 00:19:39.212 "trsvcid": "4420" 00:19:39.212 }, 00:19:39.212 "secure_channel": true 00:19:39.212 } 00:19:39.212 } 00:19:39.212 ] 00:19:39.212 } 00:19:39.212 ] 00:19:39.212 }' 00:19:39.212 00:53:31 -- target/tls.sh@197 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:39.470 00:53:32 -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:39.470 "subsystems": [ 00:19:39.470 { 00:19:39.470 "subsystem": "keyring", 00:19:39.470 "config": [] 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "subsystem": "iobuf", 00:19:39.470 "config": [ 00:19:39.470 { 00:19:39.470 "method": "iobuf_set_options", 00:19:39.470 "params": { 00:19:39.470 "small_pool_count": 8192, 00:19:39.470 "large_pool_count": 1024, 00:19:39.470 "small_bufsize": 8192, 00:19:39.470 "large_bufsize": 135168 00:19:39.470 } 00:19:39.470 } 00:19:39.470 ] 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "subsystem": "sock", 00:19:39.470 "config": [ 00:19:39.470 { 00:19:39.470 "method": "sock_impl_set_options", 00:19:39.470 "params": { 00:19:39.470 "impl_name": "posix", 00:19:39.470 "recv_buf_size": 2097152, 00:19:39.470 "send_buf_size": 2097152, 00:19:39.470 "enable_recv_pipe": true, 00:19:39.470 "enable_quickack": false, 00:19:39.470 "enable_placement_id": 0, 00:19:39.470 "enable_zerocopy_send_server": true, 00:19:39.470 "enable_zerocopy_send_client": false, 00:19:39.470 "zerocopy_threshold": 0, 00:19:39.470 "tls_version": 0, 00:19:39.470 "enable_ktls": false 00:19:39.470 } 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "method": "sock_impl_set_options", 00:19:39.470 "params": { 00:19:39.470 "impl_name": "ssl", 00:19:39.470 "recv_buf_size": 4096, 00:19:39.470 "send_buf_size": 4096, 00:19:39.470 "enable_recv_pipe": true, 00:19:39.470 "enable_quickack": false, 00:19:39.470 "enable_placement_id": 0, 00:19:39.470 "enable_zerocopy_send_server": true, 00:19:39.470 "enable_zerocopy_send_client": false, 00:19:39.470 "zerocopy_threshold": 0, 00:19:39.470 "tls_version": 0, 00:19:39.470 "enable_ktls": false 00:19:39.470 } 00:19:39.470 } 00:19:39.470 ] 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "subsystem": "vmd", 00:19:39.470 "config": [] 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "subsystem": "accel", 00:19:39.470 "config": [ 00:19:39.470 { 00:19:39.470 "method": "accel_set_options", 00:19:39.470 "params": { 00:19:39.470 "small_cache_size": 128, 00:19:39.470 "large_cache_size": 16, 00:19:39.470 "task_count": 2048, 00:19:39.470 "sequence_count": 2048, 00:19:39.470 "buf_count": 2048 00:19:39.470 } 00:19:39.470 } 00:19:39.470 ] 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "subsystem": "bdev", 00:19:39.470 "config": [ 00:19:39.470 { 00:19:39.470 "method": "bdev_set_options", 00:19:39.470 "params": { 00:19:39.470 "bdev_io_pool_size": 65535, 00:19:39.470 "bdev_io_cache_size": 256, 00:19:39.470 "bdev_auto_examine": true, 00:19:39.470 "iobuf_small_cache_size": 128, 
00:19:39.470 "iobuf_large_cache_size": 16 00:19:39.470 } 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "method": "bdev_raid_set_options", 00:19:39.470 "params": { 00:19:39.470 "process_window_size_kb": 1024 00:19:39.470 } 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "method": "bdev_iscsi_set_options", 00:19:39.470 "params": { 00:19:39.470 "timeout_sec": 30 00:19:39.470 } 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "method": "bdev_nvme_set_options", 00:19:39.470 "params": { 00:19:39.470 "action_on_timeout": "none", 00:19:39.470 "timeout_us": 0, 00:19:39.470 "timeout_admin_us": 0, 00:19:39.470 "keep_alive_timeout_ms": 10000, 00:19:39.470 "arbitration_burst": 0, 00:19:39.470 "low_priority_weight": 0, 00:19:39.470 "medium_priority_weight": 0, 00:19:39.470 "high_priority_weight": 0, 00:19:39.470 "nvme_adminq_poll_period_us": 10000, 00:19:39.470 "nvme_ioq_poll_period_us": 0, 00:19:39.470 "io_queue_requests": 512, 00:19:39.470 "delay_cmd_submit": true, 00:19:39.470 "transport_retry_count": 4, 00:19:39.470 "bdev_retry_count": 3, 00:19:39.470 "transport_ack_timeout": 0, 00:19:39.470 "ctrlr_loss_timeout_sec": 0, 00:19:39.470 "reconnect_delay_sec": 0, 00:19:39.470 "fast_io_fail_timeout_sec": 0, 00:19:39.470 "disable_auto_failback": false, 00:19:39.470 "generate_uuids": false, 00:19:39.470 "transport_tos": 0, 00:19:39.470 "nvme_error_stat": false, 00:19:39.470 "rdma_srq_size": 0, 00:19:39.470 "io_path_stat": false, 00:19:39.470 "allow_accel_sequence": false, 00:19:39.470 "rdma_max_cq_size": 0, 00:19:39.470 "rdma_cm_event_timeout_ms": 0, 00:19:39.470 "dhchap_digests": [ 00:19:39.470 "sha256", 00:19:39.470 "sha384", 00:19:39.470 "sha512" 00:19:39.470 ], 00:19:39.470 "dhchap_dhgroups": [ 00:19:39.470 "null", 00:19:39.470 "ffdhe2048", 00:19:39.470 "ffdhe3072", 00:19:39.470 "ffdhe4096", 00:19:39.470 "ffdhe6144", 00:19:39.470 "ffdhe8192" 00:19:39.470 ] 00:19:39.470 } 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "method": "bdev_nvme_attach_controller", 00:19:39.470 "params": { 00:19:39.470 "name": "TLSTEST", 00:19:39.470 "trtype": "TCP", 00:19:39.470 "adrfam": "IPv4", 00:19:39.470 "traddr": "10.0.0.2", 00:19:39.470 "trsvcid": "4420", 00:19:39.470 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:39.470 "prchk_reftag": false, 00:19:39.470 "prchk_guard": false, 00:19:39.470 "ctrlr_loss_timeout_sec": 0, 00:19:39.470 "reconnect_delay_sec": 0, 00:19:39.470 "fast_io_fail_timeout_sec": 0, 00:19:39.470 "psk": "/tmp/tmp.aACpVS6BpP", 00:19:39.470 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:39.470 "hdgst": false, 00:19:39.470 "ddgst": false 00:19:39.470 } 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "method": "bdev_nvme_set_hotplug", 00:19:39.470 "params": { 00:19:39.470 "period_us": 100000, 00:19:39.470 "enable": false 00:19:39.470 } 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "method": "bdev_wait_for_examine" 00:19:39.470 } 00:19:39.470 ] 00:19:39.470 }, 00:19:39.470 { 00:19:39.470 "subsystem": "nbd", 00:19:39.470 "config": [] 00:19:39.470 } 00:19:39.470 ] 00:19:39.470 }' 00:19:39.470 00:53:32 -- target/tls.sh@199 -- # killprocess 2793223 00:19:39.470 00:53:32 -- common/autotest_common.sh@936 -- # '[' -z 2793223 ']' 00:19:39.470 00:53:32 -- common/autotest_common.sh@940 -- # kill -0 2793223 00:19:39.470 00:53:32 -- common/autotest_common.sh@941 -- # uname 00:19:39.470 00:53:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:39.470 00:53:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2793223 00:19:39.470 00:53:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:39.470 
00:53:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:39.470 00:53:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2793223' 00:19:39.470 killing process with pid 2793223 00:19:39.470 00:53:32 -- common/autotest_common.sh@955 -- # kill 2793223 00:19:39.470 Received shutdown signal, test time was about 10.000000 seconds 00:19:39.470 00:19:39.470 Latency(us) 00:19:39.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:39.470 =================================================================================================================== 00:19:39.470 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:39.471 [2024-04-27 00:53:32.061956] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:39.471 00:53:32 -- common/autotest_common.sh@960 -- # wait 2793223 00:19:39.728 00:53:32 -- target/tls.sh@200 -- # killprocess 2792896 00:19:39.728 00:53:32 -- common/autotest_common.sh@936 -- # '[' -z 2792896 ']' 00:19:39.728 00:53:32 -- common/autotest_common.sh@940 -- # kill -0 2792896 00:19:39.728 00:53:32 -- common/autotest_common.sh@941 -- # uname 00:19:39.728 00:53:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:39.988 00:53:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2792896 00:19:39.988 00:53:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:39.988 00:53:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:39.988 00:53:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2792896' 00:19:39.988 killing process with pid 2792896 00:19:39.988 00:53:32 -- common/autotest_common.sh@955 -- # kill 2792896 00:19:39.988 [2024-04-27 00:53:32.459524] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:39.988 00:53:32 -- common/autotest_common.sh@960 -- # wait 2792896 00:19:40.249 00:53:32 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:40.249 00:53:32 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:40.249 00:53:32 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:40.249 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:40.249 00:53:32 -- target/tls.sh@203 -- # echo '{ 00:19:40.249 "subsystems": [ 00:19:40.249 { 00:19:40.249 "subsystem": "keyring", 00:19:40.249 "config": [] 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "subsystem": "iobuf", 00:19:40.249 "config": [ 00:19:40.249 { 00:19:40.249 "method": "iobuf_set_options", 00:19:40.249 "params": { 00:19:40.249 "small_pool_count": 8192, 00:19:40.249 "large_pool_count": 1024, 00:19:40.249 "small_bufsize": 8192, 00:19:40.249 "large_bufsize": 135168 00:19:40.249 } 00:19:40.249 } 00:19:40.249 ] 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "subsystem": "sock", 00:19:40.249 "config": [ 00:19:40.249 { 00:19:40.249 "method": "sock_impl_set_options", 00:19:40.249 "params": { 00:19:40.249 "impl_name": "posix", 00:19:40.249 "recv_buf_size": 2097152, 00:19:40.249 "send_buf_size": 2097152, 00:19:40.249 "enable_recv_pipe": true, 00:19:40.249 "enable_quickack": false, 00:19:40.249 "enable_placement_id": 0, 00:19:40.249 "enable_zerocopy_send_server": true, 00:19:40.249 "enable_zerocopy_send_client": false, 00:19:40.249 "zerocopy_threshold": 0, 00:19:40.249 "tls_version": 0, 00:19:40.249 "enable_ktls": false 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 
"method": "sock_impl_set_options", 00:19:40.249 "params": { 00:19:40.249 "impl_name": "ssl", 00:19:40.249 "recv_buf_size": 4096, 00:19:40.249 "send_buf_size": 4096, 00:19:40.249 "enable_recv_pipe": true, 00:19:40.249 "enable_quickack": false, 00:19:40.249 "enable_placement_id": 0, 00:19:40.249 "enable_zerocopy_send_server": true, 00:19:40.249 "enable_zerocopy_send_client": false, 00:19:40.249 "zerocopy_threshold": 0, 00:19:40.249 "tls_version": 0, 00:19:40.249 "enable_ktls": false 00:19:40.249 } 00:19:40.249 } 00:19:40.249 ] 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "subsystem": "vmd", 00:19:40.249 "config": [] 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "subsystem": "accel", 00:19:40.249 "config": [ 00:19:40.249 { 00:19:40.249 "method": "accel_set_options", 00:19:40.249 "params": { 00:19:40.249 "small_cache_size": 128, 00:19:40.249 "large_cache_size": 16, 00:19:40.249 "task_count": 2048, 00:19:40.249 "sequence_count": 2048, 00:19:40.249 "buf_count": 2048 00:19:40.249 } 00:19:40.249 } 00:19:40.249 ] 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "subsystem": "bdev", 00:19:40.249 "config": [ 00:19:40.249 { 00:19:40.249 "method": "bdev_set_options", 00:19:40.249 "params": { 00:19:40.249 "bdev_io_pool_size": 65535, 00:19:40.249 "bdev_io_cache_size": 256, 00:19:40.249 "bdev_auto_examine": true, 00:19:40.249 "iobuf_small_cache_size": 128, 00:19:40.249 "iobuf_large_cache_size": 16 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "bdev_raid_set_options", 00:19:40.249 "params": { 00:19:40.249 "process_window_size_kb": 1024 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "bdev_iscsi_set_options", 00:19:40.249 "params": { 00:19:40.249 "timeout_sec": 30 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "bdev_nvme_set_options", 00:19:40.249 "params": { 00:19:40.249 "action_on_timeout": "none", 00:19:40.249 "timeout_us": 0, 00:19:40.249 "timeout_admin_us": 0, 00:19:40.249 "keep_alive_timeout_ms": 10000, 00:19:40.249 "arbitration_burst": 0, 00:19:40.249 "low_priority_weight": 0, 00:19:40.249 "medium_priority_weight": 0, 00:19:40.249 "high_priority_weight": 0, 00:19:40.249 "nvme_adminq_poll_period_us": 10000, 00:19:40.249 "nvme_ioq_poll_period_us": 0, 00:19:40.249 "io_queue_requests": 0, 00:19:40.249 "delay_cmd_submit": true, 00:19:40.249 "transport_retry_count": 4, 00:19:40.249 "bdev_retry_count": 3, 00:19:40.249 "transport_ack_timeout": 0, 00:19:40.249 "ctrlr_loss_timeout_sec": 0, 00:19:40.249 "reconnect_delay_sec": 0, 00:19:40.249 "fast_io_fail_timeout_sec": 0, 00:19:40.249 "disable_auto_failback": false, 00:19:40.249 "generate_uuids": false, 00:19:40.249 "transport_tos": 0, 00:19:40.249 "nvme_error_stat": false, 00:19:40.249 "rdma_srq_size": 0, 00:19:40.249 "io_path_stat": false, 00:19:40.249 "allow_accel_sequence": false, 00:19:40.249 "rdma_max_cq_size": 0, 00:19:40.249 "rdma_cm_event_timeout_ms": 0, 00:19:40.249 "dhchap_digests": [ 00:19:40.249 "sha256", 00:19:40.249 "sha384", 00:19:40.249 "sha512" 00:19:40.249 ], 00:19:40.249 "dhchap_dhgroups": [ 00:19:40.249 "null", 00:19:40.249 "ffdhe2048", 00:19:40.249 "ffdhe3072", 00:19:40.249 "ffdhe4096", 00:19:40.249 "ffdhe6144", 00:19:40.249 "ffdhe8192" 00:19:40.249 ] 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "bdev_nvme_set_hotplug", 00:19:40.249 "params": { 00:19:40.249 "period_us": 100000, 00:19:40.249 "enable": false 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "bdev_malloc_create", 00:19:40.249 "params": { 00:19:40.249 "name": "malloc0", 
00:19:40.249 "num_blocks": 8192, 00:19:40.249 "block_size": 4096, 00:19:40.249 "physical_block_size": 4096, 00:19:40.249 "uuid": "72c97bb9-4313-4123-8687-48ead1f39b7f", 00:19:40.249 "optimal_io_boundary": 0 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "bdev_wait_for_examine" 00:19:40.249 } 00:19:40.249 ] 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "subsystem": "nbd", 00:19:40.249 "config": [] 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "subsystem": "scheduler", 00:19:40.249 "config": [ 00:19:40.249 { 00:19:40.249 "method": "framework_set_scheduler", 00:19:40.249 "params": { 00:19:40.249 "name": "static" 00:19:40.249 } 00:19:40.249 } 00:19:40.249 ] 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "subsystem": "nvmf", 00:19:40.249 "config": [ 00:19:40.249 { 00:19:40.249 "method": "nvmf_set_config", 00:19:40.249 "params": { 00:19:40.249 "discovery_filter": "match_any", 00:19:40.249 "admin_cmd_passthru": { 00:19:40.249 "identify_ctrlr": false 00:19:40.249 } 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "nvmf_set_max_subsystems", 00:19:40.249 "params": { 00:19:40.249 "max_subsystems": 1024 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "nvmf_set_crdt", 00:19:40.249 "params": { 00:19:40.249 "crdt1": 0, 00:19:40.249 "crdt2": 0, 00:19:40.249 "crdt3": 0 00:19:40.249 } 00:19:40.249 }, 00:19:40.249 { 00:19:40.249 "method": "nvmf_create_transport", 00:19:40.249 "params": { 00:19:40.249 "trtype": "TCP", 00:19:40.249 "max_queue_depth": 128, 00:19:40.249 "max_io_qpairs_per_ctrlr": 127, 00:19:40.249 "in_capsule_data_size": 4096, 00:19:40.249 "max_io_size": 131072, 00:19:40.250 "io_unit_size": 131072, 00:19:40.250 "max_aq_depth": 128, 00:19:40.250 "num_shared_buffers": 511, 00:19:40.250 "buf_cache_size": 4294967295, 00:19:40.250 "dif_insert_or_strip": false, 00:19:40.250 "zcopy": false, 00:19:40.250 "c2h_success": false, 00:19:40.250 "sock_priority": 0, 00:19:40.250 "abort_timeout_sec": 1, 00:19:40.250 "ack_timeout": 0, 00:19:40.250 "data_wr_pool_size": 0 00:19:40.250 } 00:19:40.250 }, 00:19:40.250 { 00:19:40.250 "method": "nvmf_create_subsystem", 00:19:40.250 "params": { 00:19:40.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.250 "allow_any_host": false, 00:19:40.250 "serial_number": "SPDK00000000000001", 00:19:40.250 "model_number": "SPDK bdev Controller", 00:19:40.250 "max_namespaces": 10, 00:19:40.250 "min_cntlid": 1, 00:19:40.250 "max_cntlid": 65519, 00:19:40.250 "ana_reporting": false 00:19:40.250 } 00:19:40.250 }, 00:19:40.250 { 00:19:40.250 "method": "nvmf_subsystem_add_host", 00:19:40.250 "params": { 00:19:40.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.250 "host": "nqn.2016-06.io.spdk:host1", 00:19:40.250 "psk": "/tmp/tmp.aACpVS6BpP" 00:19:40.250 } 00:19:40.250 }, 00:19:40.250 { 00:19:40.250 "method": "nvmf_subsystem_add_ns", 00:19:40.250 "params": { 00:19:40.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.250 "namespace": { 00:19:40.250 "nsid": 1, 00:19:40.250 "bdev_name": "malloc0", 00:19:40.250 "nguid": "72C97BB943134123868748EAD1F39B7F", 00:19:40.250 "uuid": "72c97bb9-4313-4123-8687-48ead1f39b7f", 00:19:40.250 "no_auto_visible": false 00:19:40.250 } 00:19:40.250 } 00:19:40.250 }, 00:19:40.250 { 00:19:40.250 "method": "nvmf_subsystem_add_listener", 00:19:40.250 "params": { 00:19:40.250 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.250 "listen_address": { 00:19:40.250 "trtype": "TCP", 00:19:40.250 "adrfam": "IPv4", 00:19:40.250 "traddr": "10.0.0.2", 00:19:40.250 "trsvcid": "4420" 00:19:40.250 }, 00:19:40.250 
"secure_channel": true 00:19:40.250 } 00:19:40.250 } 00:19:40.250 ] 00:19:40.250 } 00:19:40.250 ] 00:19:40.250 }' 00:19:40.508 00:53:32 -- nvmf/common.sh@470 -- # nvmfpid=2793642 00:19:40.508 00:53:32 -- nvmf/common.sh@471 -- # waitforlisten 2793642 00:19:40.508 00:53:32 -- common/autotest_common.sh@817 -- # '[' -z 2793642 ']' 00:19:40.508 00:53:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.508 00:53:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:40.508 00:53:32 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:40.508 00:53:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.508 00:53:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:40.508 00:53:32 -- common/autotest_common.sh@10 -- # set +x 00:19:40.508 [2024-04-27 00:53:33.030371] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:40.508 [2024-04-27 00:53:33.030491] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.508 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.508 [2024-04-27 00:53:33.157945] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.768 [2024-04-27 00:53:33.257314] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:40.768 [2024-04-27 00:53:33.257356] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:40.768 [2024-04-27 00:53:33.257366] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:40.768 [2024-04-27 00:53:33.257377] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:40.768 [2024-04-27 00:53:33.257385] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:40.768 [2024-04-27 00:53:33.257481] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.026 [2024-04-27 00:53:33.555169] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.026 [2024-04-27 00:53:33.571109] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:41.026 [2024-04-27 00:53:33.587118] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:41.026 [2024-04-27 00:53:33.587349] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:41.026 00:53:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:41.026 00:53:33 -- common/autotest_common.sh@850 -- # return 0 00:19:41.026 00:53:33 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:41.026 00:53:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:41.026 00:53:33 -- common/autotest_common.sh@10 -- # set +x 00:19:41.285 00:53:33 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.285 00:53:33 -- target/tls.sh@207 -- # bdevperf_pid=2793862 00:19:41.285 00:53:33 -- target/tls.sh@208 -- # waitforlisten 2793862 /var/tmp/bdevperf.sock 00:19:41.285 00:53:33 -- common/autotest_common.sh@817 -- # '[' -z 2793862 ']' 00:19:41.285 00:53:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:41.285 00:53:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:41.285 00:53:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:41.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
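The JSON blobs above are not hand-written: tls.sh@196-197 captures them with save_config from both RPC sockets and then replays them, booting a fresh nvmf_tgt from the captured target configuration (fed in as /dev/fd/62) and a fresh bdevperf from the bdevperf configuration printed below (fed in as /dev/fd/63), so the TLS listener, malloc0 namespace and PSK host entry come back without re-issuing individual RPCs. A sketch of the pattern with shortened paths (/tmp/tgt.json is an illustrative name; the script itself uses process substitution, and the ip netns wrapper around the target is omitted):

    # Capture the live target configuration, then replay it into a new instance.
    scripts/rpc.py save_config > /tmp/tgt.json
    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /tmp/tgt.json

    # Initiator side: bdevperf starts idle (-z) on its own RPC socket and reads
    # its bdev config (including the PSK-bearing attach) from the supplied JSON.
    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63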
00:19:41.285 00:53:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:41.285 00:53:33 -- target/tls.sh@204 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:41.285 00:53:33 -- common/autotest_common.sh@10 -- # set +x 00:19:41.285 00:53:33 -- target/tls.sh@204 -- # echo '{ 00:19:41.285 "subsystems": [ 00:19:41.285 { 00:19:41.285 "subsystem": "keyring", 00:19:41.285 "config": [] 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "subsystem": "iobuf", 00:19:41.285 "config": [ 00:19:41.285 { 00:19:41.285 "method": "iobuf_set_options", 00:19:41.285 "params": { 00:19:41.285 "small_pool_count": 8192, 00:19:41.285 "large_pool_count": 1024, 00:19:41.285 "small_bufsize": 8192, 00:19:41.285 "large_bufsize": 135168 00:19:41.285 } 00:19:41.285 } 00:19:41.285 ] 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "subsystem": "sock", 00:19:41.285 "config": [ 00:19:41.285 { 00:19:41.285 "method": "sock_impl_set_options", 00:19:41.285 "params": { 00:19:41.285 "impl_name": "posix", 00:19:41.285 "recv_buf_size": 2097152, 00:19:41.285 "send_buf_size": 2097152, 00:19:41.285 "enable_recv_pipe": true, 00:19:41.285 "enable_quickack": false, 00:19:41.285 "enable_placement_id": 0, 00:19:41.285 "enable_zerocopy_send_server": true, 00:19:41.285 "enable_zerocopy_send_client": false, 00:19:41.285 "zerocopy_threshold": 0, 00:19:41.285 "tls_version": 0, 00:19:41.285 "enable_ktls": false 00:19:41.285 } 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "method": "sock_impl_set_options", 00:19:41.285 "params": { 00:19:41.285 "impl_name": "ssl", 00:19:41.285 "recv_buf_size": 4096, 00:19:41.285 "send_buf_size": 4096, 00:19:41.285 "enable_recv_pipe": true, 00:19:41.285 "enable_quickack": false, 00:19:41.285 "enable_placement_id": 0, 00:19:41.285 "enable_zerocopy_send_server": true, 00:19:41.285 "enable_zerocopy_send_client": false, 00:19:41.285 "zerocopy_threshold": 0, 00:19:41.285 "tls_version": 0, 00:19:41.285 "enable_ktls": false 00:19:41.285 } 00:19:41.285 } 00:19:41.285 ] 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "subsystem": "vmd", 00:19:41.285 "config": [] 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "subsystem": "accel", 00:19:41.285 "config": [ 00:19:41.285 { 00:19:41.285 "method": "accel_set_options", 00:19:41.285 "params": { 00:19:41.285 "small_cache_size": 128, 00:19:41.285 "large_cache_size": 16, 00:19:41.285 "task_count": 2048, 00:19:41.285 "sequence_count": 2048, 00:19:41.285 "buf_count": 2048 00:19:41.285 } 00:19:41.285 } 00:19:41.285 ] 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "subsystem": "bdev", 00:19:41.285 "config": [ 00:19:41.285 { 00:19:41.285 "method": "bdev_set_options", 00:19:41.285 "params": { 00:19:41.285 "bdev_io_pool_size": 65535, 00:19:41.285 "bdev_io_cache_size": 256, 00:19:41.285 "bdev_auto_examine": true, 00:19:41.285 "iobuf_small_cache_size": 128, 00:19:41.285 "iobuf_large_cache_size": 16 00:19:41.285 } 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "method": "bdev_raid_set_options", 00:19:41.285 "params": { 00:19:41.285 "process_window_size_kb": 1024 00:19:41.285 } 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "method": "bdev_iscsi_set_options", 00:19:41.285 "params": { 00:19:41.285 "timeout_sec": 30 00:19:41.285 } 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "method": "bdev_nvme_set_options", 00:19:41.285 "params": { 00:19:41.285 "action_on_timeout": "none", 00:19:41.285 "timeout_us": 0, 00:19:41.285 "timeout_admin_us": 0, 00:19:41.285 "keep_alive_timeout_ms": 10000, 00:19:41.285 
"arbitration_burst": 0, 00:19:41.285 "low_priority_weight": 0, 00:19:41.285 "medium_priority_weight": 0, 00:19:41.285 "high_priority_weight": 0, 00:19:41.285 "nvme_adminq_poll_period_us": 10000, 00:19:41.285 "nvme_ioq_poll_period_us": 0, 00:19:41.285 "io_queue_requests": 512, 00:19:41.285 "delay_cmd_submit": true, 00:19:41.285 "transport_retry_count": 4, 00:19:41.285 "bdev_retry_count": 3, 00:19:41.285 "transport_ack_timeout": 0, 00:19:41.285 "ctrlr_loss_timeout_sec": 0, 00:19:41.285 "reconnect_delay_sec": 0, 00:19:41.285 "fast_io_fail_timeout_sec": 0, 00:19:41.285 "disable_auto_failback": false, 00:19:41.285 "generate_uuids": false, 00:19:41.285 "transport_tos": 0, 00:19:41.285 "nvme_error_stat": false, 00:19:41.285 "rdma_srq_size": 0, 00:19:41.285 "io_path_stat": false, 00:19:41.285 "allow_accel_sequence": false, 00:19:41.285 "rdma_max_cq_size": 0, 00:19:41.285 "rdma_cm_event_timeout_ms": 0, 00:19:41.285 "dhchap_digests": [ 00:19:41.285 "sha256", 00:19:41.285 "sha384", 00:19:41.285 "sha512" 00:19:41.285 ], 00:19:41.285 "dhchap_dhgroups": [ 00:19:41.285 "null", 00:19:41.285 "ffdhe2048", 00:19:41.285 "ffdhe3072", 00:19:41.285 "ffdhe4096", 00:19:41.285 "ffdhe6144", 00:19:41.285 "ffdhe8192" 00:19:41.285 ] 00:19:41.285 } 00:19:41.285 }, 00:19:41.285 { 00:19:41.285 "method": "bdev_nvme_attach_controller", 00:19:41.285 "params": { 00:19:41.285 "name": "TLSTEST", 00:19:41.285 "trtype": "TCP", 00:19:41.285 "adrfam": "IPv4", 00:19:41.285 "traddr": "10.0.0.2", 00:19:41.285 "trsvcid": "4420", 00:19:41.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:41.285 "prchk_reftag": false, 00:19:41.285 "prchk_guard": false, 00:19:41.285 "ctrlr_loss_timeout_sec": 0, 00:19:41.285 "reconnect_delay_sec": 0, 00:19:41.285 "fast_io_fail_timeout_sec": 0, 00:19:41.285 "psk": "/tmp/tmp.aACpVS6BpP", 00:19:41.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:41.286 "hdgst": false, 00:19:41.286 "ddgst": false 00:19:41.286 } 00:19:41.286 }, 00:19:41.286 { 00:19:41.286 "method": "bdev_nvme_set_hotplug", 00:19:41.286 "params": { 00:19:41.286 "period_us": 100000, 00:19:41.286 "enable": false 00:19:41.286 } 00:19:41.286 }, 00:19:41.286 { 00:19:41.286 "method": "bdev_wait_for_examine" 00:19:41.286 } 00:19:41.286 ] 00:19:41.286 }, 00:19:41.286 { 00:19:41.286 "subsystem": "nbd", 00:19:41.286 "config": [] 00:19:41.286 } 00:19:41.286 ] 00:19:41.286 }' 00:19:41.286 [2024-04-27 00:53:33.813698] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:19:41.286 [2024-04-27 00:53:33.813807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2793862 ] 00:19:41.286 EAL: No free 2048 kB hugepages reported on node 1 00:19:41.286 [2024-04-27 00:53:33.924994] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:41.544 [2024-04-27 00:53:34.018803] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.544 [2024-04-27 00:53:34.228888] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:41.544 [2024-04-27 00:53:34.228991] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:42.111 00:53:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:42.111 00:53:34 -- common/autotest_common.sh@850 -- # return 0 00:19:42.111 00:53:34 -- target/tls.sh@211 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:42.111 Running I/O for 10 seconds... 00:19:52.093 00:19:52.093 Latency(us) 00:19:52.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.093 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:52.093 Verification LBA range: start 0x0 length 0x2000 00:19:52.093 TLSTESTn1 : 10.01 5587.94 21.83 0.00 0.00 22873.36 5346.36 44978.39 00:19:52.093 =================================================================================================================== 00:19:52.093 Total : 5587.94 21.83 0.00 0.00 22873.36 5346.36 44978.39 00:19:52.093 0 00:19:52.093 00:53:44 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:52.093 00:53:44 -- target/tls.sh@214 -- # killprocess 2793862 00:19:52.093 00:53:44 -- common/autotest_common.sh@936 -- # '[' -z 2793862 ']' 00:19:52.093 00:53:44 -- common/autotest_common.sh@940 -- # kill -0 2793862 00:19:52.093 00:53:44 -- common/autotest_common.sh@941 -- # uname 00:19:52.093 00:53:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.093 00:53:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2793862 00:19:52.093 00:53:44 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:19:52.093 00:53:44 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:19:52.093 00:53:44 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2793862' 00:19:52.093 killing process with pid 2793862 00:19:52.093 00:53:44 -- common/autotest_common.sh@955 -- # kill 2793862 00:19:52.093 Received shutdown signal, test time was about 10.000000 seconds 00:19:52.093 00:19:52.093 Latency(us) 00:19:52.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:52.093 =================================================================================================================== 00:19:52.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:52.093 [2024-04-27 00:53:44.647206] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:52.093 00:53:44 -- common/autotest_common.sh@960 -- # wait 2793862 00:19:52.351 00:53:45 -- target/tls.sh@215 -- # killprocess 2793642 00:19:52.351 00:53:45 -- common/autotest_common.sh@936 -- # '[' -z 2793642 ']' 
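With both ends configured, the I/O itself is started out of band through bdevperf's RPC helper rather than by restarting the process; the TLSTESTn1 run above sustains roughly 5.6k IOPS of the 4096-byte verify workload across the TLS-encrypted connection, at an average latency of about 22.9 ms, consistent with queue depth 128. The invocation from the trace, shortened:

    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests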
00:19:52.351 00:53:45 -- common/autotest_common.sh@940 -- # kill -0 2793642 00:19:52.351 00:53:45 -- common/autotest_common.sh@941 -- # uname 00:19:52.351 00:53:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:52.351 00:53:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2793642 00:19:52.611 00:53:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:52.611 00:53:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:52.611 00:53:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2793642' 00:19:52.611 killing process with pid 2793642 00:19:52.611 00:53:45 -- common/autotest_common.sh@955 -- # kill 2793642 00:19:52.611 [2024-04-27 00:53:45.068258] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:52.611 00:53:45 -- common/autotest_common.sh@960 -- # wait 2793642 00:19:53.179 00:53:45 -- target/tls.sh@218 -- # nvmfappstart 00:19:53.179 00:53:45 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:53.179 00:53:45 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:53.179 00:53:45 -- common/autotest_common.sh@10 -- # set +x 00:19:53.179 00:53:45 -- nvmf/common.sh@470 -- # nvmfpid=2796242 00:19:53.179 00:53:45 -- nvmf/common.sh@471 -- # waitforlisten 2796242 00:19:53.179 00:53:45 -- common/autotest_common.sh@817 -- # '[' -z 2796242 ']' 00:19:53.179 00:53:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.179 00:53:45 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:53.179 00:53:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:53.179 00:53:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:53.179 00:53:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:53.179 00:53:45 -- common/autotest_common.sh@10 -- # set +x 00:19:53.179 [2024-04-27 00:53:45.715109] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:53.179 [2024-04-27 00:53:45.715250] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.179 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.179 [2024-04-27 00:53:45.854015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.438 [2024-04-27 00:53:45.945320] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.438 [2024-04-27 00:53:45.945368] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.438 [2024-04-27 00:53:45.945378] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.438 [2024-04-27 00:53:45.945388] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.438 [2024-04-27 00:53:45.945396] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:53.438 [2024-04-27 00:53:45.945441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.005 00:53:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:54.005 00:53:46 -- common/autotest_common.sh@850 -- # return 0 00:19:54.005 00:53:46 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:54.005 00:53:46 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:54.005 00:53:46 -- common/autotest_common.sh@10 -- # set +x 00:19:54.005 00:53:46 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.005 00:53:46 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.aACpVS6BpP 00:19:54.005 00:53:46 -- target/tls.sh@49 -- # local key=/tmp/tmp.aACpVS6BpP 00:19:54.005 00:53:46 -- target/tls.sh@51 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:54.005 [2024-04-27 00:53:46.586160] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.005 00:53:46 -- target/tls.sh@52 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:54.262 00:53:46 -- target/tls.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:54.262 [2024-04-27 00:53:46.854252] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.262 [2024-04-27 00:53:46.854487] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.262 00:53:46 -- target/tls.sh@55 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:54.521 malloc0 00:19:54.521 00:53:47 -- target/tls.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:54.521 00:53:47 -- target/tls.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.aACpVS6BpP 00:19:54.782 [2024-04-27 00:53:47.298643] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:54.782 00:53:47 -- target/tls.sh@220 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:54.782 00:53:47 -- target/tls.sh@222 -- # bdevperf_pid=2796570 00:19:54.782 00:53:47 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:54.782 00:53:47 -- target/tls.sh@225 -- # waitforlisten 2796570 /var/tmp/bdevperf.sock 00:19:54.782 00:53:47 -- common/autotest_common.sh@817 -- # '[' -z 2796570 ']' 00:19:54.782 00:53:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.782 00:53:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:54.782 00:53:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.782 00:53:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:54.782 00:53:47 -- common/autotest_common.sh@10 -- # set +x 00:19:54.782 [2024-04-27 00:53:47.401576] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:19:54.782 [2024-04-27 00:53:47.401719] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2796570 ] 00:19:55.042 EAL: No free 2048 kB hugepages reported on node 1 00:19:55.042 [2024-04-27 00:53:47.534349] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:55.042 [2024-04-27 00:53:47.626054] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.613 00:53:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:55.613 00:53:48 -- common/autotest_common.sh@850 -- # return 0 00:19:55.613 00:53:48 -- target/tls.sh@227 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aACpVS6BpP 00:19:55.873 00:53:48 -- target/tls.sh@228 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:55.873 [2024-04-27 00:53:48.463211] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.873 nvme0n1 00:19:55.873 00:53:48 -- target/tls.sh@232 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:56.133 Running I/O for 1 seconds... 00:19:57.072 00:19:57.072 Latency(us) 00:19:57.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.072 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:57.072 Verification LBA range: start 0x0 length 0x2000 00:19:57.072 nvme0n1 : 1.01 5104.82 19.94 0.00 0.00 24906.03 5035.92 55188.21 00:19:57.072 =================================================================================================================== 00:19:57.072 Total : 5104.82 19.94 0.00 0.00 24906.03 5035.92 55188.21 00:19:57.072 0 00:19:57.072 00:53:49 -- target/tls.sh@234 -- # killprocess 2796570 00:19:57.072 00:53:49 -- common/autotest_common.sh@936 -- # '[' -z 2796570 ']' 00:19:57.072 00:53:49 -- common/autotest_common.sh@940 -- # kill -0 2796570 00:19:57.072 00:53:49 -- common/autotest_common.sh@941 -- # uname 00:19:57.072 00:53:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:57.072 00:53:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2796570 00:19:57.072 00:53:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:57.072 00:53:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:57.072 00:53:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2796570' 00:19:57.072 killing process with pid 2796570 00:19:57.072 00:53:49 -- common/autotest_common.sh@955 -- # kill 2796570 00:19:57.072 Received shutdown signal, test time was about 1.000000 seconds 00:19:57.072 00:19:57.072 Latency(us) 00:19:57.072 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:57.072 =================================================================================================================== 00:19:57.072 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:57.072 00:53:49 -- common/autotest_common.sh@960 -- # wait 2796570 00:19:57.637 00:53:50 -- target/tls.sh@235 -- # killprocess 2796242 00:19:57.637 00:53:50 -- common/autotest_common.sh@936 -- # '[' -z 2796242 ']' 00:19:57.637 
00:53:50 -- common/autotest_common.sh@940 -- # kill -0 2796242 00:19:57.637 00:53:50 -- common/autotest_common.sh@941 -- # uname 00:19:57.637 00:53:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:57.637 00:53:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2796242 00:19:57.637 00:53:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:57.637 00:53:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:57.638 00:53:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2796242' 00:19:57.638 killing process with pid 2796242 00:19:57.638 00:53:50 -- common/autotest_common.sh@955 -- # kill 2796242 00:19:57.638 [2024-04-27 00:53:50.130794] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:57.638 00:53:50 -- common/autotest_common.sh@960 -- # wait 2796242 00:19:58.204 00:53:50 -- target/tls.sh@238 -- # nvmfappstart 00:19:58.204 00:53:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:58.204 00:53:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:58.204 00:53:50 -- common/autotest_common.sh@10 -- # set +x 00:19:58.204 00:53:50 -- nvmf/common.sh@470 -- # nvmfpid=2797186 00:19:58.204 00:53:50 -- nvmf/common.sh@471 -- # waitforlisten 2797186 00:19:58.204 00:53:50 -- common/autotest_common.sh@817 -- # '[' -z 2797186 ']' 00:19:58.204 00:53:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.204 00:53:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:58.204 00:53:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.204 00:53:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:58.204 00:53:50 -- common/autotest_common.sh@10 -- # set +x 00:19:58.204 00:53:50 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:58.204 [2024-04-27 00:53:50.708245] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:19:58.204 [2024-04-27 00:53:50.708350] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.204 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.204 [2024-04-27 00:53:50.811719] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.463 [2024-04-27 00:53:50.902097] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.463 [2024-04-27 00:53:50.902144] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.463 [2024-04-27 00:53:50.902153] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:58.463 [2024-04-27 00:53:50.902162] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:58.463 [2024-04-27 00:53:50.902169] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
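The initiator side of the run above is symmetric: bdevperf is started idle with its own RPC socket, the same PSK is loaded into its keyring, and the controller is attached with --psk before the timed workload runs. A sketch with binary paths and flags taken from the log ($BRPC is shorthand introduced here):

"$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 &
BRPC="$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock"
$BRPC keyring_file_add_key key0 /tmp/tmp.aACpVS6BpP     # same PSK file as the target
$BRPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
# Run the 1-second verify workload; the Latency table above is its output.
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests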
00:19:58.463 [2024-04-27 00:53:50.902198] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.721 00:53:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:58.721 00:53:51 -- common/autotest_common.sh@850 -- # return 0 00:19:58.721 00:53:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:58.721 00:53:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:58.721 00:53:51 -- common/autotest_common.sh@10 -- # set +x 00:19:58.981 00:53:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.981 00:53:51 -- target/tls.sh@239 -- # rpc_cmd 00:19:58.981 00:53:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:58.981 00:53:51 -- common/autotest_common.sh@10 -- # set +x 00:19:58.981 [2024-04-27 00:53:51.444883] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:58.981 malloc0 00:19:58.981 [2024-04-27 00:53:51.491026] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:58.981 [2024-04-27 00:53:51.491293] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.981 00:53:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:58.981 00:53:51 -- target/tls.sh@252 -- # bdevperf_pid=2797350 00:19:58.981 00:53:51 -- target/tls.sh@254 -- # waitforlisten 2797350 /var/tmp/bdevperf.sock 00:19:58.981 00:53:51 -- common/autotest_common.sh@817 -- # '[' -z 2797350 ']' 00:19:58.981 00:53:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.981 00:53:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:58.981 00:53:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.981 00:53:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:58.981 00:53:51 -- common/autotest_common.sh@10 -- # set +x 00:19:58.981 00:53:51 -- target/tls.sh@250 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:58.981 [2024-04-27 00:53:51.570388] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:19:58.981 [2024-04-27 00:53:51.570471] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2797350 ] 00:19:58.981 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.981 [2024-04-27 00:53:51.664820] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:59.241 [2024-04-27 00:53:51.755669] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:59.808 00:53:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:59.808 00:53:52 -- common/autotest_common.sh@850 -- # return 0 00:19:59.808 00:53:52 -- target/tls.sh@255 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.aACpVS6BpP 00:19:59.808 00:53:52 -- target/tls.sh@256 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:59.808 [2024-04-27 00:53:52.501543] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:00.066 nvme0n1 00:20:00.066 00:53:52 -- target/tls.sh@260 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:00.066 Running I/O for 1 seconds... 00:20:01.061 00:20:01.061 Latency(us) 00:20:01.061 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.061 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:01.061 Verification LBA range: start 0x0 length 0x2000 00:20:01.061 nvme0n1 : 1.02 5531.92 21.61 0.00 0.00 22962.58 4725.49 29111.78 00:20:01.061 =================================================================================================================== 00:20:01.061 Total : 5531.92 21.61 0.00 0.00 22962.58 4725.49 29111.78 00:20:01.061 0 00:20:01.061 00:53:53 -- target/tls.sh@263 -- # rpc_cmd save_config 00:20:01.061 00:53:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:01.061 00:53:53 -- common/autotest_common.sh@10 -- # set +x 00:20:01.320 00:53:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:01.320 00:53:53 -- target/tls.sh@263 -- # tgtcfg='{ 00:20:01.320 "subsystems": [ 00:20:01.320 { 00:20:01.320 "subsystem": "keyring", 00:20:01.320 "config": [ 00:20:01.320 { 00:20:01.320 "method": "keyring_file_add_key", 00:20:01.320 "params": { 00:20:01.320 "name": "key0", 00:20:01.320 "path": "/tmp/tmp.aACpVS6BpP" 00:20:01.320 } 00:20:01.320 } 00:20:01.320 ] 00:20:01.320 }, 00:20:01.320 { 00:20:01.320 "subsystem": "iobuf", 00:20:01.320 "config": [ 00:20:01.320 { 00:20:01.320 "method": "iobuf_set_options", 00:20:01.320 "params": { 00:20:01.320 "small_pool_count": 8192, 00:20:01.320 "large_pool_count": 1024, 00:20:01.320 "small_bufsize": 8192, 00:20:01.320 "large_bufsize": 135168 00:20:01.320 } 00:20:01.320 } 00:20:01.320 ] 00:20:01.320 }, 00:20:01.320 { 00:20:01.320 "subsystem": "sock", 00:20:01.320 "config": [ 00:20:01.320 { 00:20:01.320 "method": "sock_impl_set_options", 00:20:01.320 "params": { 00:20:01.320 "impl_name": "posix", 00:20:01.320 "recv_buf_size": 2097152, 00:20:01.320 "send_buf_size": 2097152, 00:20:01.320 "enable_recv_pipe": true, 00:20:01.320 "enable_quickack": false, 00:20:01.320 "enable_placement_id": 0, 00:20:01.320 
"enable_zerocopy_send_server": true, 00:20:01.320 "enable_zerocopy_send_client": false, 00:20:01.320 "zerocopy_threshold": 0, 00:20:01.320 "tls_version": 0, 00:20:01.320 "enable_ktls": false 00:20:01.320 } 00:20:01.320 }, 00:20:01.320 { 00:20:01.320 "method": "sock_impl_set_options", 00:20:01.320 "params": { 00:20:01.320 "impl_name": "ssl", 00:20:01.320 "recv_buf_size": 4096, 00:20:01.320 "send_buf_size": 4096, 00:20:01.320 "enable_recv_pipe": true, 00:20:01.320 "enable_quickack": false, 00:20:01.320 "enable_placement_id": 0, 00:20:01.320 "enable_zerocopy_send_server": true, 00:20:01.320 "enable_zerocopy_send_client": false, 00:20:01.320 "zerocopy_threshold": 0, 00:20:01.320 "tls_version": 0, 00:20:01.320 "enable_ktls": false 00:20:01.320 } 00:20:01.320 } 00:20:01.320 ] 00:20:01.320 }, 00:20:01.320 { 00:20:01.320 "subsystem": "vmd", 00:20:01.320 "config": [] 00:20:01.320 }, 00:20:01.320 { 00:20:01.320 "subsystem": "accel", 00:20:01.320 "config": [ 00:20:01.320 { 00:20:01.320 "method": "accel_set_options", 00:20:01.320 "params": { 00:20:01.320 "small_cache_size": 128, 00:20:01.320 "large_cache_size": 16, 00:20:01.320 "task_count": 2048, 00:20:01.320 "sequence_count": 2048, 00:20:01.320 "buf_count": 2048 00:20:01.320 } 00:20:01.320 } 00:20:01.320 ] 00:20:01.320 }, 00:20:01.321 { 00:20:01.321 "subsystem": "bdev", 00:20:01.321 "config": [ 00:20:01.321 { 00:20:01.321 "method": "bdev_set_options", 00:20:01.321 "params": { 00:20:01.321 "bdev_io_pool_size": 65535, 00:20:01.321 "bdev_io_cache_size": 256, 00:20:01.321 "bdev_auto_examine": true, 00:20:01.321 "iobuf_small_cache_size": 128, 00:20:01.321 "iobuf_large_cache_size": 16 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "bdev_raid_set_options", 00:20:01.321 "params": { 00:20:01.321 "process_window_size_kb": 1024 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "bdev_iscsi_set_options", 00:20:01.321 "params": { 00:20:01.321 "timeout_sec": 30 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "bdev_nvme_set_options", 00:20:01.321 "params": { 00:20:01.321 "action_on_timeout": "none", 00:20:01.321 "timeout_us": 0, 00:20:01.321 "timeout_admin_us": 0, 00:20:01.321 "keep_alive_timeout_ms": 10000, 00:20:01.321 "arbitration_burst": 0, 00:20:01.321 "low_priority_weight": 0, 00:20:01.321 "medium_priority_weight": 0, 00:20:01.321 "high_priority_weight": 0, 00:20:01.321 "nvme_adminq_poll_period_us": 10000, 00:20:01.321 "nvme_ioq_poll_period_us": 0, 00:20:01.321 "io_queue_requests": 0, 00:20:01.321 "delay_cmd_submit": true, 00:20:01.321 "transport_retry_count": 4, 00:20:01.321 "bdev_retry_count": 3, 00:20:01.321 "transport_ack_timeout": 0, 00:20:01.321 "ctrlr_loss_timeout_sec": 0, 00:20:01.321 "reconnect_delay_sec": 0, 00:20:01.321 "fast_io_fail_timeout_sec": 0, 00:20:01.321 "disable_auto_failback": false, 00:20:01.321 "generate_uuids": false, 00:20:01.321 "transport_tos": 0, 00:20:01.321 "nvme_error_stat": false, 00:20:01.321 "rdma_srq_size": 0, 00:20:01.321 "io_path_stat": false, 00:20:01.321 "allow_accel_sequence": false, 00:20:01.321 "rdma_max_cq_size": 0, 00:20:01.321 "rdma_cm_event_timeout_ms": 0, 00:20:01.321 "dhchap_digests": [ 00:20:01.321 "sha256", 00:20:01.321 "sha384", 00:20:01.321 "sha512" 00:20:01.321 ], 00:20:01.321 "dhchap_dhgroups": [ 00:20:01.321 "null", 00:20:01.321 "ffdhe2048", 00:20:01.321 "ffdhe3072", 00:20:01.321 "ffdhe4096", 00:20:01.321 "ffdhe6144", 00:20:01.321 "ffdhe8192" 00:20:01.321 ] 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": 
"bdev_nvme_set_hotplug", 00:20:01.321 "params": { 00:20:01.321 "period_us": 100000, 00:20:01.321 "enable": false 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "bdev_malloc_create", 00:20:01.321 "params": { 00:20:01.321 "name": "malloc0", 00:20:01.321 "num_blocks": 8192, 00:20:01.321 "block_size": 4096, 00:20:01.321 "physical_block_size": 4096, 00:20:01.321 "uuid": "6a3c76a8-7bad-415c-896f-72d8ed3a89b4", 00:20:01.321 "optimal_io_boundary": 0 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "bdev_wait_for_examine" 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "subsystem": "nbd", 00:20:01.321 "config": [] 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "subsystem": "scheduler", 00:20:01.321 "config": [ 00:20:01.321 { 00:20:01.321 "method": "framework_set_scheduler", 00:20:01.321 "params": { 00:20:01.321 "name": "static" 00:20:01.321 } 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "subsystem": "nvmf", 00:20:01.321 "config": [ 00:20:01.321 { 00:20:01.321 "method": "nvmf_set_config", 00:20:01.321 "params": { 00:20:01.321 "discovery_filter": "match_any", 00:20:01.321 "admin_cmd_passthru": { 00:20:01.321 "identify_ctrlr": false 00:20:01.321 } 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "nvmf_set_max_subsystems", 00:20:01.321 "params": { 00:20:01.321 "max_subsystems": 1024 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "nvmf_set_crdt", 00:20:01.321 "params": { 00:20:01.321 "crdt1": 0, 00:20:01.321 "crdt2": 0, 00:20:01.321 "crdt3": 0 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "nvmf_create_transport", 00:20:01.321 "params": { 00:20:01.321 "trtype": "TCP", 00:20:01.321 "max_queue_depth": 128, 00:20:01.321 "max_io_qpairs_per_ctrlr": 127, 00:20:01.321 "in_capsule_data_size": 4096, 00:20:01.321 "max_io_size": 131072, 00:20:01.321 "io_unit_size": 131072, 00:20:01.321 "max_aq_depth": 128, 00:20:01.321 "num_shared_buffers": 511, 00:20:01.321 "buf_cache_size": 4294967295, 00:20:01.321 "dif_insert_or_strip": false, 00:20:01.321 "zcopy": false, 00:20:01.321 "c2h_success": false, 00:20:01.321 "sock_priority": 0, 00:20:01.321 "abort_timeout_sec": 1, 00:20:01.321 "ack_timeout": 0, 00:20:01.321 "data_wr_pool_size": 0 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "nvmf_create_subsystem", 00:20:01.321 "params": { 00:20:01.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.321 "allow_any_host": false, 00:20:01.321 "serial_number": "00000000000000000000", 00:20:01.321 "model_number": "SPDK bdev Controller", 00:20:01.321 "max_namespaces": 32, 00:20:01.321 "min_cntlid": 1, 00:20:01.321 "max_cntlid": 65519, 00:20:01.321 "ana_reporting": false 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "nvmf_subsystem_add_host", 00:20:01.321 "params": { 00:20:01.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.321 "host": "nqn.2016-06.io.spdk:host1", 00:20:01.321 "psk": "key0" 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "nvmf_subsystem_add_ns", 00:20:01.321 "params": { 00:20:01.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.321 "namespace": { 00:20:01.321 "nsid": 1, 00:20:01.321 "bdev_name": "malloc0", 00:20:01.321 "nguid": "6A3C76A87BAD415C896F72D8ED3A89B4", 00:20:01.321 "uuid": "6a3c76a8-7bad-415c-896f-72d8ed3a89b4", 00:20:01.321 "no_auto_visible": false 00:20:01.321 } 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "nvmf_subsystem_add_listener", 00:20:01.321 "params": { 
00:20:01.321 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.321 "listen_address": { 00:20:01.321 "trtype": "TCP", 00:20:01.321 "adrfam": "IPv4", 00:20:01.321 "traddr": "10.0.0.2", 00:20:01.321 "trsvcid": "4420" 00:20:01.321 }, 00:20:01.321 "secure_channel": true 00:20:01.321 } 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 }' 00:20:01.321 00:53:53 -- target/tls.sh@264 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:20:01.321 00:53:53 -- target/tls.sh@264 -- # bperfcfg='{ 00:20:01.321 "subsystems": [ 00:20:01.321 { 00:20:01.321 "subsystem": "keyring", 00:20:01.321 "config": [ 00:20:01.321 { 00:20:01.321 "method": "keyring_file_add_key", 00:20:01.321 "params": { 00:20:01.321 "name": "key0", 00:20:01.321 "path": "/tmp/tmp.aACpVS6BpP" 00:20:01.321 } 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "subsystem": "iobuf", 00:20:01.321 "config": [ 00:20:01.321 { 00:20:01.321 "method": "iobuf_set_options", 00:20:01.321 "params": { 00:20:01.321 "small_pool_count": 8192, 00:20:01.321 "large_pool_count": 1024, 00:20:01.321 "small_bufsize": 8192, 00:20:01.321 "large_bufsize": 135168 00:20:01.321 } 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "subsystem": "sock", 00:20:01.321 "config": [ 00:20:01.321 { 00:20:01.321 "method": "sock_impl_set_options", 00:20:01.321 "params": { 00:20:01.321 "impl_name": "posix", 00:20:01.321 "recv_buf_size": 2097152, 00:20:01.321 "send_buf_size": 2097152, 00:20:01.321 "enable_recv_pipe": true, 00:20:01.321 "enable_quickack": false, 00:20:01.321 "enable_placement_id": 0, 00:20:01.321 "enable_zerocopy_send_server": true, 00:20:01.321 "enable_zerocopy_send_client": false, 00:20:01.321 "zerocopy_threshold": 0, 00:20:01.321 "tls_version": 0, 00:20:01.321 "enable_ktls": false 00:20:01.321 } 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "method": "sock_impl_set_options", 00:20:01.321 "params": { 00:20:01.321 "impl_name": "ssl", 00:20:01.321 "recv_buf_size": 4096, 00:20:01.321 "send_buf_size": 4096, 00:20:01.321 "enable_recv_pipe": true, 00:20:01.321 "enable_quickack": false, 00:20:01.321 "enable_placement_id": 0, 00:20:01.321 "enable_zerocopy_send_server": true, 00:20:01.321 "enable_zerocopy_send_client": false, 00:20:01.321 "zerocopy_threshold": 0, 00:20:01.321 "tls_version": 0, 00:20:01.321 "enable_ktls": false 00:20:01.321 } 00:20:01.321 } 00:20:01.321 ] 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "subsystem": "vmd", 00:20:01.321 "config": [] 00:20:01.321 }, 00:20:01.321 { 00:20:01.321 "subsystem": "accel", 00:20:01.321 "config": [ 00:20:01.321 { 00:20:01.321 "method": "accel_set_options", 00:20:01.321 "params": { 00:20:01.322 "small_cache_size": 128, 00:20:01.322 "large_cache_size": 16, 00:20:01.322 "task_count": 2048, 00:20:01.322 "sequence_count": 2048, 00:20:01.322 "buf_count": 2048 00:20:01.322 } 00:20:01.322 } 00:20:01.322 ] 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "subsystem": "bdev", 00:20:01.322 "config": [ 00:20:01.322 { 00:20:01.322 "method": "bdev_set_options", 00:20:01.322 "params": { 00:20:01.322 "bdev_io_pool_size": 65535, 00:20:01.322 "bdev_io_cache_size": 256, 00:20:01.322 "bdev_auto_examine": true, 00:20:01.322 "iobuf_small_cache_size": 128, 00:20:01.322 "iobuf_large_cache_size": 16 00:20:01.322 } 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "method": "bdev_raid_set_options", 00:20:01.322 "params": { 00:20:01.322 "process_window_size_kb": 1024 00:20:01.322 } 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "method": 
"bdev_iscsi_set_options", 00:20:01.322 "params": { 00:20:01.322 "timeout_sec": 30 00:20:01.322 } 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "method": "bdev_nvme_set_options", 00:20:01.322 "params": { 00:20:01.322 "action_on_timeout": "none", 00:20:01.322 "timeout_us": 0, 00:20:01.322 "timeout_admin_us": 0, 00:20:01.322 "keep_alive_timeout_ms": 10000, 00:20:01.322 "arbitration_burst": 0, 00:20:01.322 "low_priority_weight": 0, 00:20:01.322 "medium_priority_weight": 0, 00:20:01.322 "high_priority_weight": 0, 00:20:01.322 "nvme_adminq_poll_period_us": 10000, 00:20:01.322 "nvme_ioq_poll_period_us": 0, 00:20:01.322 "io_queue_requests": 512, 00:20:01.322 "delay_cmd_submit": true, 00:20:01.322 "transport_retry_count": 4, 00:20:01.322 "bdev_retry_count": 3, 00:20:01.322 "transport_ack_timeout": 0, 00:20:01.322 "ctrlr_loss_timeout_sec": 0, 00:20:01.322 "reconnect_delay_sec": 0, 00:20:01.322 "fast_io_fail_timeout_sec": 0, 00:20:01.322 "disable_auto_failback": false, 00:20:01.322 "generate_uuids": false, 00:20:01.322 "transport_tos": 0, 00:20:01.322 "nvme_error_stat": false, 00:20:01.322 "rdma_srq_size": 0, 00:20:01.322 "io_path_stat": false, 00:20:01.322 "allow_accel_sequence": false, 00:20:01.322 "rdma_max_cq_size": 0, 00:20:01.322 "rdma_cm_event_timeout_ms": 0, 00:20:01.322 "dhchap_digests": [ 00:20:01.322 "sha256", 00:20:01.322 "sha384", 00:20:01.322 "sha512" 00:20:01.322 ], 00:20:01.322 "dhchap_dhgroups": [ 00:20:01.322 "null", 00:20:01.322 "ffdhe2048", 00:20:01.322 "ffdhe3072", 00:20:01.322 "ffdhe4096", 00:20:01.322 "ffdhe6144", 00:20:01.322 "ffdhe8192" 00:20:01.322 ] 00:20:01.322 } 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "method": "bdev_nvme_attach_controller", 00:20:01.322 "params": { 00:20:01.322 "name": "nvme0", 00:20:01.322 "trtype": "TCP", 00:20:01.322 "adrfam": "IPv4", 00:20:01.322 "traddr": "10.0.0.2", 00:20:01.322 "trsvcid": "4420", 00:20:01.322 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:01.322 "prchk_reftag": false, 00:20:01.322 "prchk_guard": false, 00:20:01.322 "ctrlr_loss_timeout_sec": 0, 00:20:01.322 "reconnect_delay_sec": 0, 00:20:01.322 "fast_io_fail_timeout_sec": 0, 00:20:01.322 "psk": "key0", 00:20:01.322 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:01.322 "hdgst": false, 00:20:01.322 "ddgst": false 00:20:01.322 } 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "method": "bdev_nvme_set_hotplug", 00:20:01.322 "params": { 00:20:01.322 "period_us": 100000, 00:20:01.322 "enable": false 00:20:01.322 } 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "method": "bdev_enable_histogram", 00:20:01.322 "params": { 00:20:01.322 "name": "nvme0n1", 00:20:01.322 "enable": true 00:20:01.322 } 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "method": "bdev_wait_for_examine" 00:20:01.322 } 00:20:01.322 ] 00:20:01.322 }, 00:20:01.322 { 00:20:01.322 "subsystem": "nbd", 00:20:01.322 "config": [] 00:20:01.322 } 00:20:01.322 ] 00:20:01.322 }' 00:20:01.322 00:53:53 -- target/tls.sh@266 -- # killprocess 2797350 00:20:01.322 00:53:53 -- common/autotest_common.sh@936 -- # '[' -z 2797350 ']' 00:20:01.322 00:53:53 -- common/autotest_common.sh@940 -- # kill -0 2797350 00:20:01.322 00:53:53 -- common/autotest_common.sh@941 -- # uname 00:20:01.322 00:53:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:01.322 00:53:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2797350 00:20:01.322 00:53:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:01.322 00:53:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:01.322 00:53:54 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 2797350' 00:20:01.322 killing process with pid 2797350 00:20:01.322 00:53:54 -- common/autotest_common.sh@955 -- # kill 2797350 00:20:01.322 Received shutdown signal, test time was about 1.000000 seconds 00:20:01.322 00:20:01.322 Latency(us) 00:20:01.322 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:01.322 =================================================================================================================== 00:20:01.322 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:01.322 00:53:54 -- common/autotest_common.sh@960 -- # wait 2797350 00:20:01.888 00:53:54 -- target/tls.sh@267 -- # killprocess 2797186 00:20:01.888 00:53:54 -- common/autotest_common.sh@936 -- # '[' -z 2797186 ']' 00:20:01.888 00:53:54 -- common/autotest_common.sh@940 -- # kill -0 2797186 00:20:01.888 00:53:54 -- common/autotest_common.sh@941 -- # uname 00:20:01.888 00:53:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:01.888 00:53:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2797186 00:20:01.888 00:53:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:01.888 00:53:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:01.888 00:53:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2797186' 00:20:01.888 killing process with pid 2797186 00:20:01.888 00:53:54 -- common/autotest_common.sh@955 -- # kill 2797186 00:20:01.888 00:53:54 -- common/autotest_common.sh@960 -- # wait 2797186 00:20:02.458 00:53:54 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:20:02.458 00:53:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:02.458 00:53:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:02.458 00:53:54 -- common/autotest_common.sh@10 -- # set +x 00:20:02.458 00:53:54 -- target/tls.sh@269 -- # echo '{ 00:20:02.458 "subsystems": [ 00:20:02.458 { 00:20:02.458 "subsystem": "keyring", 00:20:02.458 "config": [ 00:20:02.458 { 00:20:02.458 "method": "keyring_file_add_key", 00:20:02.458 "params": { 00:20:02.458 "name": "key0", 00:20:02.458 "path": "/tmp/tmp.aACpVS6BpP" 00:20:02.458 } 00:20:02.458 } 00:20:02.458 ] 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "subsystem": "iobuf", 00:20:02.458 "config": [ 00:20:02.458 { 00:20:02.458 "method": "iobuf_set_options", 00:20:02.458 "params": { 00:20:02.458 "small_pool_count": 8192, 00:20:02.458 "large_pool_count": 1024, 00:20:02.458 "small_bufsize": 8192, 00:20:02.458 "large_bufsize": 135168 00:20:02.458 } 00:20:02.458 } 00:20:02.458 ] 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "subsystem": "sock", 00:20:02.458 "config": [ 00:20:02.458 { 00:20:02.458 "method": "sock_impl_set_options", 00:20:02.458 "params": { 00:20:02.458 "impl_name": "posix", 00:20:02.458 "recv_buf_size": 2097152, 00:20:02.458 "send_buf_size": 2097152, 00:20:02.458 "enable_recv_pipe": true, 00:20:02.458 "enable_quickack": false, 00:20:02.458 "enable_placement_id": 0, 00:20:02.458 "enable_zerocopy_send_server": true, 00:20:02.458 "enable_zerocopy_send_client": false, 00:20:02.458 "zerocopy_threshold": 0, 00:20:02.458 "tls_version": 0, 00:20:02.458 "enable_ktls": false 00:20:02.458 } 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "method": "sock_impl_set_options", 00:20:02.458 "params": { 00:20:02.458 "impl_name": "ssl", 00:20:02.458 "recv_buf_size": 4096, 00:20:02.458 "send_buf_size": 4096, 00:20:02.458 "enable_recv_pipe": true, 00:20:02.458 "enable_quickack": false, 00:20:02.458 "enable_placement_id": 
0, 00:20:02.458 "enable_zerocopy_send_server": true, 00:20:02.458 "enable_zerocopy_send_client": false, 00:20:02.458 "zerocopy_threshold": 0, 00:20:02.458 "tls_version": 0, 00:20:02.458 "enable_ktls": false 00:20:02.458 } 00:20:02.458 } 00:20:02.458 ] 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "subsystem": "vmd", 00:20:02.458 "config": [] 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "subsystem": "accel", 00:20:02.458 "config": [ 00:20:02.458 { 00:20:02.458 "method": "accel_set_options", 00:20:02.458 "params": { 00:20:02.458 "small_cache_size": 128, 00:20:02.458 "large_cache_size": 16, 00:20:02.458 "task_count": 2048, 00:20:02.458 "sequence_count": 2048, 00:20:02.458 "buf_count": 2048 00:20:02.458 } 00:20:02.458 } 00:20:02.458 ] 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "subsystem": "bdev", 00:20:02.458 "config": [ 00:20:02.458 { 00:20:02.458 "method": "bdev_set_options", 00:20:02.458 "params": { 00:20:02.458 "bdev_io_pool_size": 65535, 00:20:02.458 "bdev_io_cache_size": 256, 00:20:02.458 "bdev_auto_examine": true, 00:20:02.458 "iobuf_small_cache_size": 128, 00:20:02.458 "iobuf_large_cache_size": 16 00:20:02.458 } 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "method": "bdev_raid_set_options", 00:20:02.458 "params": { 00:20:02.458 "process_window_size_kb": 1024 00:20:02.458 } 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "method": "bdev_iscsi_set_options", 00:20:02.458 "params": { 00:20:02.458 "timeout_sec": 30 00:20:02.458 } 00:20:02.458 }, 00:20:02.458 { 00:20:02.458 "method": "bdev_nvme_set_options", 00:20:02.458 "params": { 00:20:02.458 "action_on_timeout": "none", 00:20:02.458 "timeout_us": 0, 00:20:02.458 "timeout_admin_us": 0, 00:20:02.458 "keep_alive_timeout_ms": 10000, 00:20:02.458 "arbitration_burst": 0, 00:20:02.458 "low_priority_weight": 0, 00:20:02.458 "medium_priority_weight": 0, 00:20:02.458 "high_priority_weight": 0, 00:20:02.458 "nvme_adminq_poll_period_us": 10000, 00:20:02.458 "nvme_ioq_poll_period_us": 0, 00:20:02.458 "io_queue_requests": 0, 00:20:02.458 "delay_cmd_submit": true, 00:20:02.458 "transport_retry_count": 4, 00:20:02.458 "bdev_retry_count": 3, 00:20:02.458 "transport_ack_timeout": 0, 00:20:02.458 "ctrlr_loss_timeout_sec": 0, 00:20:02.458 "reconnect_delay_sec": 0, 00:20:02.458 "fast_io_fail_timeout_sec": 0, 00:20:02.458 "disable_auto_failback": false, 00:20:02.458 "generate_uuids": false, 00:20:02.458 "transport_tos": 0, 00:20:02.458 "nvme_error_stat": false, 00:20:02.458 "rdma_srq_size": 0, 00:20:02.458 "io_path_stat": false, 00:20:02.458 "allow_accel_sequence": false, 00:20:02.458 "rdma_max_cq_size": 0, 00:20:02.458 "rdma_cm_event_timeout_ms": 0, 00:20:02.458 "dhchap_digests": [ 00:20:02.458 "sha256", 00:20:02.458 "sha384", 00:20:02.458 "sha512" 00:20:02.458 ], 00:20:02.458 "dhchap_dhgroups": [ 00:20:02.458 "null", 00:20:02.459 "ffdhe2048", 00:20:02.459 "ffdhe3072", 00:20:02.459 "ffdhe4096", 00:20:02.459 "ffdhe6144", 00:20:02.459 "ffdhe8192" 00:20:02.459 ] 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "bdev_nvme_set_hotplug", 00:20:02.459 "params": { 00:20:02.459 "period_us": 100000, 00:20:02.459 "enable": false 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "bdev_malloc_create", 00:20:02.459 "params": { 00:20:02.459 "name": "malloc0", 00:20:02.459 "num_blocks": 8192, 00:20:02.459 "block_size": 4096, 00:20:02.459 "physical_block_size": 4096, 00:20:02.459 "uuid": "6a3c76a8-7bad-415c-896f-72d8ed3a89b4", 00:20:02.459 "optimal_io_boundary": 0 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": 
"bdev_wait_for_examine" 00:20:02.459 } 00:20:02.459 ] 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "subsystem": "nbd", 00:20:02.459 "config": [] 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "subsystem": "scheduler", 00:20:02.459 "config": [ 00:20:02.459 { 00:20:02.459 "method": "framework_set_scheduler", 00:20:02.459 "params": { 00:20:02.459 "name": "static" 00:20:02.459 } 00:20:02.459 } 00:20:02.459 ] 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "subsystem": "nvmf", 00:20:02.459 "config": [ 00:20:02.459 { 00:20:02.459 "method": "nvmf_set_config", 00:20:02.459 "params": { 00:20:02.459 "discovery_filter": "match_any", 00:20:02.459 "admin_cmd_passthru": { 00:20:02.459 "identify_ctrlr": false 00:20:02.459 } 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "nvmf_set_max_subsystems", 00:20:02.459 "params": { 00:20:02.459 "max_subsystems": 1024 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "nvmf_set_crdt", 00:20:02.459 "params": { 00:20:02.459 "crdt1": 0, 00:20:02.459 "crdt2": 0, 00:20:02.459 "crdt3": 0 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "nvmf_create_transport", 00:20:02.459 "params": { 00:20:02.459 "trtype": "TCP", 00:20:02.459 "max_queue_depth": 128, 00:20:02.459 "max_io_qpairs_per_ctrlr": 127, 00:20:02.459 "in_capsule_data_size": 4096, 00:20:02.459 "max_io_size": 131072, 00:20:02.459 "io_unit_size": 131072, 00:20:02.459 "max_aq_depth": 128, 00:20:02.459 "num_shared_buffers": 511, 00:20:02.459 "buf_cache_size": 4294967295, 00:20:02.459 "dif_insert_or_strip": false, 00:20:02.459 "zcopy": false, 00:20:02.459 "c2h_success": false, 00:20:02.459 "sock_priority": 0, 00:20:02.459 "abort_timeout_sec": 1, 00:20:02.459 "ack_timeout": 0, 00:20:02.459 "data_wr_pool_size": 0 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "nvmf_create_subsystem", 00:20:02.459 "params": { 00:20:02.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.459 "allow_any_host": false, 00:20:02.459 "serial_number": "00000000000000000000", 00:20:02.459 "model_number": "SPDK bdev Controller", 00:20:02.459 "max_namespaces": 32, 00:20:02.459 "min_cntlid": 1, 00:20:02.459 "max_cntlid": 65519, 00:20:02.459 "ana_reporting": false 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "nvmf_subsystem_add_host", 00:20:02.459 "params": { 00:20:02.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.459 "host": "nqn.2016-06.io.spdk:host1", 00:20:02.459 "psk": "key0" 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "nvmf_subsystem_add_ns", 00:20:02.459 "params": { 00:20:02.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.459 "namespace": { 00:20:02.459 "nsid": 1, 00:20:02.459 "bdev_name": "malloc0", 00:20:02.459 "nguid": "6A3C76A87BAD415C896F72D8ED3A89B4", 00:20:02.459 "uuid": "6a3c76a8-7bad-415c-896f-72d8ed3a89b4", 00:20:02.459 "no_auto_visible": false 00:20:02.459 } 00:20:02.459 } 00:20:02.459 }, 00:20:02.459 { 00:20:02.459 "method": "nvmf_subsystem_add_listener", 00:20:02.459 "params": { 00:20:02.459 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:02.459 "listen_address": { 00:20:02.459 "trtype": "TCP", 00:20:02.459 "adrfam": "IPv4", 00:20:02.459 "traddr": "10.0.0.2", 00:20:02.459 "trsvcid": "4420" 00:20:02.459 }, 00:20:02.459 "secure_channel": true 00:20:02.459 } 00:20:02.459 } 00:20:02.459 ] 00:20:02.459 } 00:20:02.459 ] 00:20:02.459 }' 00:20:02.459 00:53:54 -- nvmf/common.sh@470 -- # nvmfpid=2798124 00:20:02.459 00:53:54 -- nvmf/common.sh@471 -- # waitforlisten 2798124 00:20:02.459 00:53:54 -- 
common/autotest_common.sh@817 -- # '[' -z 2798124 ']' 00:20:02.459 00:53:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:02.459 00:53:54 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:20:02.459 00:53:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:02.459 00:53:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:02.459 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:02.459 00:53:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:02.459 00:53:54 -- common/autotest_common.sh@10 -- # set +x 00:20:02.459 [2024-04-27 00:53:54.981388] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:20:02.459 [2024-04-27 00:53:54.981498] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:02.459 EAL: No free 2048 kB hugepages reported on node 1 00:20:02.459 [2024-04-27 00:53:55.104175] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.719 [2024-04-27 00:53:55.194170] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:02.719 [2024-04-27 00:53:55.194207] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:02.719 [2024-04-27 00:53:55.194217] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:02.719 [2024-04-27 00:53:55.194230] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:02.719 [2024-04-27 00:53:55.194237] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:02.719 [2024-04-27 00:53:55.194316] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.978 [2024-04-27 00:53:55.483404] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:02.978 [2024-04-27 00:53:55.515369] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:02.978 [2024-04-27 00:53:55.515614] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:03.238 00:53:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:03.238 00:53:55 -- common/autotest_common.sh@850 -- # return 0 00:20:03.238 00:53:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:03.238 00:53:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:03.238 00:53:55 -- common/autotest_common.sh@10 -- # set +x 00:20:03.238 00:53:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:03.238 00:53:55 -- target/tls.sh@272 -- # bdevperf_pid=2798145 00:20:03.238 00:53:55 -- target/tls.sh@273 -- # waitforlisten 2798145 /var/tmp/bdevperf.sock 00:20:03.238 00:53:55 -- common/autotest_common.sh@817 -- # '[' -z 2798145 ']' 00:20:03.238 00:53:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:03.238 00:53:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:03.238 00:53:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:03.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:03.238 00:53:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:03.238 00:53:55 -- common/autotest_common.sh@10 -- # set +x 00:20:03.238 00:53:55 -- target/tls.sh@270 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:20:03.238 00:53:55 -- target/tls.sh@270 -- # echo '{ 00:20:03.238 "subsystems": [ 00:20:03.238 { 00:20:03.238 "subsystem": "keyring", 00:20:03.238 "config": [ 00:20:03.238 { 00:20:03.238 "method": "keyring_file_add_key", 00:20:03.238 "params": { 00:20:03.238 "name": "key0", 00:20:03.238 "path": "/tmp/tmp.aACpVS6BpP" 00:20:03.238 } 00:20:03.238 } 00:20:03.238 ] 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "subsystem": "iobuf", 00:20:03.238 "config": [ 00:20:03.238 { 00:20:03.238 "method": "iobuf_set_options", 00:20:03.238 "params": { 00:20:03.238 "small_pool_count": 8192, 00:20:03.238 "large_pool_count": 1024, 00:20:03.238 "small_bufsize": 8192, 00:20:03.238 "large_bufsize": 135168 00:20:03.238 } 00:20:03.238 } 00:20:03.238 ] 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "subsystem": "sock", 00:20:03.238 "config": [ 00:20:03.238 { 00:20:03.238 "method": "sock_impl_set_options", 00:20:03.238 "params": { 00:20:03.238 "impl_name": "posix", 00:20:03.238 "recv_buf_size": 2097152, 00:20:03.238 "send_buf_size": 2097152, 00:20:03.238 "enable_recv_pipe": true, 00:20:03.238 "enable_quickack": false, 00:20:03.238 "enable_placement_id": 0, 00:20:03.238 "enable_zerocopy_send_server": true, 00:20:03.238 "enable_zerocopy_send_client": false, 00:20:03.238 "zerocopy_threshold": 0, 00:20:03.238 "tls_version": 0, 00:20:03.238 "enable_ktls": false 00:20:03.238 } 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "method": "sock_impl_set_options", 00:20:03.238 "params": { 00:20:03.238 "impl_name": "ssl", 00:20:03.238 "recv_buf_size": 4096, 00:20:03.238 "send_buf_size": 4096, 00:20:03.238 "enable_recv_pipe": true, 00:20:03.238 "enable_quickack": false, 00:20:03.238 "enable_placement_id": 0, 00:20:03.238 "enable_zerocopy_send_server": true, 00:20:03.238 "enable_zerocopy_send_client": false, 00:20:03.238 "zerocopy_threshold": 0, 00:20:03.238 "tls_version": 0, 00:20:03.238 "enable_ktls": false 00:20:03.238 } 00:20:03.238 } 00:20:03.238 ] 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "subsystem": "vmd", 00:20:03.238 "config": [] 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "subsystem": "accel", 00:20:03.238 "config": [ 00:20:03.238 { 00:20:03.238 "method": "accel_set_options", 00:20:03.238 "params": { 00:20:03.238 "small_cache_size": 128, 00:20:03.238 "large_cache_size": 16, 00:20:03.238 "task_count": 2048, 00:20:03.238 "sequence_count": 2048, 00:20:03.238 "buf_count": 2048 00:20:03.238 } 00:20:03.238 } 00:20:03.238 ] 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "subsystem": "bdev", 00:20:03.238 "config": [ 00:20:03.238 { 00:20:03.238 "method": "bdev_set_options", 00:20:03.238 "params": { 00:20:03.238 "bdev_io_pool_size": 65535, 00:20:03.238 "bdev_io_cache_size": 256, 00:20:03.238 "bdev_auto_examine": true, 00:20:03.238 "iobuf_small_cache_size": 128, 00:20:03.238 "iobuf_large_cache_size": 16 00:20:03.238 } 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "method": "bdev_raid_set_options", 00:20:03.238 "params": { 00:20:03.238 "process_window_size_kb": 1024 00:20:03.238 } 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "method": "bdev_iscsi_set_options", 00:20:03.238 "params": { 00:20:03.238 "timeout_sec": 
30 00:20:03.238 } 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "method": "bdev_nvme_set_options", 00:20:03.238 "params": { 00:20:03.238 "action_on_timeout": "none", 00:20:03.238 "timeout_us": 0, 00:20:03.238 "timeout_admin_us": 0, 00:20:03.238 "keep_alive_timeout_ms": 10000, 00:20:03.238 "arbitration_burst": 0, 00:20:03.238 "low_priority_weight": 0, 00:20:03.238 "medium_priority_weight": 0, 00:20:03.238 "high_priority_weight": 0, 00:20:03.238 "nvme_adminq_poll_period_us": 10000, 00:20:03.238 "nvme_ioq_poll_period_us": 0, 00:20:03.238 "io_queue_requests": 512, 00:20:03.238 "delay_cmd_submit": true, 00:20:03.238 "transport_retry_count": 4, 00:20:03.238 "bdev_retry_count": 3, 00:20:03.238 "transport_ack_timeout": 0, 00:20:03.238 "ctrlr_loss_timeout_sec": 0, 00:20:03.238 "reconnect_delay_sec": 0, 00:20:03.238 "fast_io_fail_timeout_sec": 0, 00:20:03.238 "disable_auto_failback": false, 00:20:03.238 "generate_uuids": false, 00:20:03.238 "transport_tos": 0, 00:20:03.238 "nvme_error_stat": false, 00:20:03.238 "rdma_srq_size": 0, 00:20:03.238 "io_path_stat": false, 00:20:03.238 "allow_accel_sequence": false, 00:20:03.238 "rdma_max_cq_size": 0, 00:20:03.238 "rdma_cm_event_timeout_ms": 0, 00:20:03.238 "dhchap_digests": [ 00:20:03.238 "sha256", 00:20:03.238 "sha384", 00:20:03.238 "sha512" 00:20:03.238 ], 00:20:03.238 "dhchap_dhgroups": [ 00:20:03.238 "null", 00:20:03.238 "ffdhe2048", 00:20:03.238 "ffdhe3072", 00:20:03.238 "ffdhe4096", 00:20:03.238 "ffdhe6144", 00:20:03.238 "ffdhe8192" 00:20:03.238 ] 00:20:03.238 } 00:20:03.238 }, 00:20:03.238 { 00:20:03.238 "method": "bdev_nvme_attach_controller", 00:20:03.238 "params": { 00:20:03.238 "name": "nvme0", 00:20:03.238 "trtype": "TCP", 00:20:03.238 "adrfam": "IPv4", 00:20:03.238 "traddr": "10.0.0.2", 00:20:03.238 "trsvcid": "4420", 00:20:03.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:03.238 "prchk_reftag": false, 00:20:03.238 "prchk_guard": false, 00:20:03.238 "ctrlr_loss_timeout_sec": 0, 00:20:03.238 "reconnect_delay_sec": 0, 00:20:03.238 "fast_io_fail_timeout_sec": 0, 00:20:03.238 "psk": "key0", 00:20:03.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:03.238 "hdgst": false, 00:20:03.239 "ddgst": false 00:20:03.239 } 00:20:03.239 }, 00:20:03.239 { 00:20:03.239 "method": "bdev_nvme_set_hotplug", 00:20:03.239 "params": { 00:20:03.239 "period_us": 100000, 00:20:03.239 "enable": false 00:20:03.239 } 00:20:03.239 }, 00:20:03.239 { 00:20:03.239 "method": "bdev_enable_histogram", 00:20:03.239 "params": { 00:20:03.239 "name": "nvme0n1", 00:20:03.239 "enable": true 00:20:03.239 } 00:20:03.239 }, 00:20:03.239 { 00:20:03.239 "method": "bdev_wait_for_examine" 00:20:03.239 } 00:20:03.239 ] 00:20:03.239 }, 00:20:03.239 { 00:20:03.239 "subsystem": "nbd", 00:20:03.239 "config": [] 00:20:03.239 } 00:20:03.239 ] 00:20:03.239 }' 00:20:03.239 [2024-04-27 00:53:55.807651] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
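The final target and bdevperf instances above take their entire configuration as JSON on /dev/fd/62 and /dev/fd/63, i.e. from bash process substitution around the save_config dumps captured earlier in the run. A sketch of the same round-trip; the cfg.json and bperf.json file names are illustrative, not from the log:

# Capture the live configuration; this is the JSON shape dumped above.
"$SPDK/scripts/rpc.py" save_config > cfg.json
# Replay it at startup: -c accepts any readable path, including the /dev/fd/NN
# descriptor produced by <(...) process substitution.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -c <(cat cfg.json) &
# bdevperf is driven the same way with its own (initiator-side) config document.
"$SPDK/build/examples/bdevperf" -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(cat bperf.json) &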
00:20:03.239 [2024-04-27 00:53:55.807802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2798145 ] 00:20:03.239 EAL: No free 2048 kB hugepages reported on node 1 00:20:03.497 [2024-04-27 00:53:55.943630] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:03.497 [2024-04-27 00:53:56.034211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.758 [2024-04-27 00:53:56.241527] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:04.018 00:53:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.018 00:53:56 -- common/autotest_common.sh@850 -- # return 0 00:20:04.018 00:53:56 -- target/tls.sh@275 -- # jq -r '.[].name' 00:20:04.018 00:53:56 -- target/tls.sh@275 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:04.018 00:53:56 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.018 00:53:56 -- target/tls.sh@276 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:04.277 Running I/O for 1 seconds... 00:20:05.217 00:20:05.217 Latency(us) 00:20:05.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.217 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:05.217 Verification LBA range: start 0x0 length 0x2000 00:20:05.217 nvme0n1 : 1.02 5579.87 21.80 0.00 0.00 22743.00 5346.36 25386.58 00:20:05.217 =================================================================================================================== 00:20:05.217 Total : 5579.87 21.80 0.00 0.00 22743.00 5346.36 25386.58 00:20:05.217 0 00:20:05.217 00:53:57 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:20:05.217 00:53:57 -- target/tls.sh@279 -- # cleanup 00:20:05.217 00:53:57 -- target/tls.sh@15 -- # process_shm --id 0 00:20:05.217 00:53:57 -- common/autotest_common.sh@794 -- # type=--id 00:20:05.217 00:53:57 -- common/autotest_common.sh@795 -- # id=0 00:20:05.217 00:53:57 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:05.217 00:53:57 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:05.217 00:53:57 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:05.217 00:53:57 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:05.217 00:53:57 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:05.217 00:53:57 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:05.217 nvmf_trace.0 00:20:05.217 00:53:57 -- common/autotest_common.sh@809 -- # return 0 00:20:05.217 00:53:57 -- target/tls.sh@16 -- # killprocess 2798145 00:20:05.217 00:53:57 -- common/autotest_common.sh@936 -- # '[' -z 2798145 ']' 00:20:05.217 00:53:57 -- common/autotest_common.sh@940 -- # kill -0 2798145 00:20:05.217 00:53:57 -- common/autotest_common.sh@941 -- # uname 00:20:05.217 00:53:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.217 00:53:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2798145 00:20:05.217 00:53:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:05.217 00:53:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 
= sudo ']' 00:20:05.217 00:53:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2798145' 00:20:05.217 killing process with pid 2798145 00:20:05.217 00:53:57 -- common/autotest_common.sh@955 -- # kill 2798145 00:20:05.217 Received shutdown signal, test time was about 1.000000 seconds 00:20:05.217 00:20:05.217 Latency(us) 00:20:05.217 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:05.217 =================================================================================================================== 00:20:05.217 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:05.217 00:53:57 -- common/autotest_common.sh@960 -- # wait 2798145 00:20:05.787 00:53:58 -- target/tls.sh@17 -- # nvmftestfini 00:20:05.787 00:53:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:05.787 00:53:58 -- nvmf/common.sh@117 -- # sync 00:20:05.787 00:53:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.787 00:53:58 -- nvmf/common.sh@120 -- # set +e 00:20:05.787 00:53:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.787 00:53:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.787 rmmod nvme_tcp 00:20:05.787 rmmod nvme_fabrics 00:20:05.787 rmmod nvme_keyring 00:20:05.787 00:53:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.787 00:53:58 -- nvmf/common.sh@124 -- # set -e 00:20:05.787 00:53:58 -- nvmf/common.sh@125 -- # return 0 00:20:05.787 00:53:58 -- nvmf/common.sh@478 -- # '[' -n 2798124 ']' 00:20:05.787 00:53:58 -- nvmf/common.sh@479 -- # killprocess 2798124 00:20:05.787 00:53:58 -- common/autotest_common.sh@936 -- # '[' -z 2798124 ']' 00:20:05.787 00:53:58 -- common/autotest_common.sh@940 -- # kill -0 2798124 00:20:05.787 00:53:58 -- common/autotest_common.sh@941 -- # uname 00:20:05.787 00:53:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:05.787 00:53:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2798124 00:20:05.787 00:53:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:05.787 00:53:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:05.787 00:53:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2798124' 00:20:05.787 killing process with pid 2798124 00:20:05.787 00:53:58 -- common/autotest_common.sh@955 -- # kill 2798124 00:20:05.787 00:53:58 -- common/autotest_common.sh@960 -- # wait 2798124 00:20:06.357 00:53:58 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:06.357 00:53:58 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:06.357 00:53:58 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:06.357 00:53:58 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:06.357 00:53:58 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:06.357 00:53:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:06.357 00:53:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:06.357 00:53:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.266 00:54:00 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:08.267 00:54:00 -- target/tls.sh@18 -- # rm -f /tmp/tmp.epv6zyfCRr /tmp/tmp.BEoxeTKi2E /tmp/tmp.aACpVS6BpP 00:20:08.267 00:20:08.267 real 1m26.216s 00:20:08.267 user 2m15.383s 00:20:08.267 sys 0m23.357s 00:20:08.267 00:54:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:08.267 00:54:00 -- common/autotest_common.sh@10 -- # set +x 00:20:08.267 ************************************ 00:20:08.267 END TEST nvmf_tls 00:20:08.267 
************************************ 00:20:08.267 00:54:00 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:08.267 00:54:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:08.267 00:54:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:08.267 00:54:00 -- common/autotest_common.sh@10 -- # set +x 00:20:08.525 ************************************ 00:20:08.525 START TEST nvmf_fips 00:20:08.525 ************************************ 00:20:08.525 00:54:01 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:08.525 * Looking for test storage... 00:20:08.525 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips 00:20:08.525 00:54:01 -- fips/fips.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:08.525 00:54:01 -- nvmf/common.sh@7 -- # uname -s 00:20:08.525 00:54:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:08.525 00:54:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:08.525 00:54:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:08.525 00:54:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:08.525 00:54:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:08.525 00:54:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:08.525 00:54:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:08.525 00:54:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:08.525 00:54:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:08.525 00:54:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:08.525 00:54:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:20:08.525 00:54:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:20:08.525 00:54:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:08.525 00:54:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:08.525 00:54:01 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:08.525 00:54:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:08.525 00:54:01 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:08.525 00:54:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:08.525 00:54:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:08.525 00:54:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:08.525 00:54:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.525 00:54:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.525 00:54:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.525 00:54:01 -- paths/export.sh@5 -- # export PATH 00:20:08.525 00:54:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:08.526 00:54:01 -- nvmf/common.sh@47 -- # : 0 00:20:08.526 00:54:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:08.526 00:54:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:08.526 00:54:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:08.526 00:54:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:08.526 00:54:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:08.526 00:54:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:08.526 00:54:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:08.526 00:54:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:08.526 00:54:01 -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:20:08.526 00:54:01 -- fips/fips.sh@89 -- # check_openssl_version 00:20:08.526 00:54:01 -- fips/fips.sh@83 -- # local target=3.0.0 00:20:08.526 00:54:01 -- fips/fips.sh@85 -- # openssl version 00:20:08.526 00:54:01 -- fips/fips.sh@85 -- # awk '{print $2}' 00:20:08.526 00:54:01 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:08.526 00:54:01 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:08.526 00:54:01 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:08.526 00:54:01 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:08.526 00:54:01 -- scripts/common.sh@333 -- # IFS=.-: 00:20:08.526 00:54:01 -- scripts/common.sh@333 -- # read -ra ver1 00:20:08.526 00:54:01 -- scripts/common.sh@334 -- # IFS=.-: 00:20:08.526 00:54:01 -- scripts/common.sh@334 -- # read -ra ver2 00:20:08.526 00:54:01 -- scripts/common.sh@335 -- # local 'op=>=' 00:20:08.526 00:54:01 -- scripts/common.sh@337 -- # ver1_l=3 00:20:08.526 00:54:01 -- scripts/common.sh@338 -- # ver2_l=3 00:20:08.526 00:54:01 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 
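The xtrace above is scripts/common.sh's version comparison: both version strings are split on ".", "-" and ":" into arrays, then the fields are compared numerically left to right. A minimal standalone sketch of the same idea (illustrative bash only, not SPDK's actual helper; it assumes purely numeric fields, whereas the traced decimal() step also regex-validates each field):

#!/usr/bin/env bash
# Sketch of the field-by-field ">=" check traced above.
version_ge() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v a b
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # a missing field counts as 0
        (( a > b )) && return 0
        (( a < b )) && return 1
    done
    return 0   # all fields equal, so ">=" holds
}
version_ge "$(openssl version | awk '{print $2}')" 3.0.0 && echo "OpenSSL >= 3.0.0"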
00:20:08.526 00:54:01 -- scripts/common.sh@341 -- # case "$op" in 00:20:08.526 00:54:01 -- scripts/common.sh@345 -- # : 1 00:20:08.526 00:54:01 -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:08.526 00:54:01 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.526 00:54:01 -- scripts/common.sh@362 -- # decimal 3 00:20:08.526 00:54:01 -- scripts/common.sh@350 -- # local d=3 00:20:08.526 00:54:01 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:08.526 00:54:01 -- scripts/common.sh@352 -- # echo 3 00:20:08.526 00:54:01 -- scripts/common.sh@362 -- # ver1[v]=3 00:20:08.526 00:54:01 -- scripts/common.sh@363 -- # decimal 3 00:20:08.526 00:54:01 -- scripts/common.sh@350 -- # local d=3 00:20:08.526 00:54:01 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:08.526 00:54:01 -- scripts/common.sh@352 -- # echo 3 00:20:08.526 00:54:01 -- scripts/common.sh@363 -- # ver2[v]=3 00:20:08.526 00:54:01 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:08.526 00:54:01 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:08.526 00:54:01 -- scripts/common.sh@361 -- # (( v++ )) 00:20:08.526 00:54:01 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.526 00:54:01 -- scripts/common.sh@362 -- # decimal 0 00:20:08.526 00:54:01 -- scripts/common.sh@350 -- # local d=0 00:20:08.526 00:54:01 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:08.526 00:54:01 -- scripts/common.sh@352 -- # echo 0 00:20:08.526 00:54:01 -- scripts/common.sh@362 -- # ver1[v]=0 00:20:08.526 00:54:01 -- scripts/common.sh@363 -- # decimal 0 00:20:08.526 00:54:01 -- scripts/common.sh@350 -- # local d=0 00:20:08.526 00:54:01 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:08.526 00:54:01 -- scripts/common.sh@352 -- # echo 0 00:20:08.526 00:54:01 -- scripts/common.sh@363 -- # ver2[v]=0 00:20:08.526 00:54:01 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:08.526 00:54:01 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:08.526 00:54:01 -- scripts/common.sh@361 -- # (( v++ )) 00:20:08.526 00:54:01 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:08.526 00:54:01 -- scripts/common.sh@362 -- # decimal 9 00:20:08.526 00:54:01 -- scripts/common.sh@350 -- # local d=9 00:20:08.526 00:54:01 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:08.526 00:54:01 -- scripts/common.sh@352 -- # echo 9 00:20:08.526 00:54:01 -- scripts/common.sh@362 -- # ver1[v]=9 00:20:08.526 00:54:01 -- scripts/common.sh@363 -- # decimal 0 00:20:08.526 00:54:01 -- scripts/common.sh@350 -- # local d=0 00:20:08.526 00:54:01 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:08.526 00:54:01 -- scripts/common.sh@352 -- # echo 0 00:20:08.526 00:54:01 -- scripts/common.sh@363 -- # ver2[v]=0 00:20:08.526 00:54:01 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:08.526 00:54:01 -- scripts/common.sh@364 -- # return 0 00:20:08.526 00:54:01 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:08.526 00:54:01 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:08.526 00:54:01 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:08.526 00:54:01 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:08.526 00:54:01 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:08.526 00:54:01 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:08.526 00:54:01 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:08.526 00:54:01 -- fips/fips.sh@113 -- # build_openssl_config 00:20:08.526 00:54:01 -- fips/fips.sh@37 -- # cat 00:20:08.526 00:54:01 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:08.526 00:54:01 -- fips/fips.sh@58 -- # cat - 00:20:08.526 00:54:01 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:08.526 00:54:01 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:08.526 00:54:01 -- fips/fips.sh@116 -- # mapfile -t providers 00:20:08.526 00:54:01 -- fips/fips.sh@116 -- # openssl list -providers 00:20:08.526 00:54:01 -- fips/fips.sh@116 -- # grep name 00:20:08.526 00:54:01 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:08.526 00:54:01 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:08.526 00:54:01 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:08.526 00:54:01 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:08.526 00:54:01 -- common/autotest_common.sh@638 -- # local es=0 00:20:08.526 00:54:01 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:08.526 00:54:01 -- common/autotest_common.sh@626 -- # local arg=openssl 00:20:08.526 00:54:01 -- fips/fips.sh@127 -- # : 00:20:08.526 00:54:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:08.526 00:54:01 -- common/autotest_common.sh@630 -- # type -t openssl 00:20:08.526 00:54:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:08.526 00:54:01 -- common/autotest_common.sh@632 -- # type -P openssl 00:20:08.526 00:54:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:08.526 00:54:01 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:20:08.526 00:54:01 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:20:08.526 00:54:01 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:20:08.785 Error setting digest 00:20:08.785 0032BAE1727F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:08.785 0032BAE1727F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:08.785 00:54:01 -- common/autotest_common.sh@641 -- # es=1 00:20:08.785 00:54:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:08.785 00:54:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:08.785 00:54:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:08.785 00:54:01 -- fips/fips.sh@130 -- # nvmftestinit 00:20:08.785 00:54:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:08.785 00:54:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:08.785 00:54:01 -- nvmf/common.sh@437 -- # prepare_net_devs 
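What the fips.sh preamble above establishes, in order: the OpenSSL version is at least 3.0.0, a FIPS module exists in the modules directory, both a base and a fips provider are loaded, and, as the actual proof, a non-approved digest such as MD5 is rejected (the "Error setting digest" above is the expected outcome). A hand-run equivalent of that last pair of checks (same commands as in the trace; output wording varies by distro):

openssl list -providers | grep name   # expect one *base* and one *fips* provider
echo -n test | openssl md5 \
    && echo "MD5 accepted: FIPS restrictions NOT in effect" \
    || echo "MD5 rejected: FIPS provider is enforcing approved algorithms"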
00:20:08.785 00:54:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:08.785 00:54:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:08.785 00:54:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:08.785 00:54:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:08.785 00:54:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:08.785 00:54:01 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:20:08.785 00:54:01 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:08.785 00:54:01 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:08.785 00:54:01 -- common/autotest_common.sh@10 -- # set +x 00:20:14.075 00:54:06 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:14.075 00:54:06 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.075 00:54:06 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.075 00:54:06 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.075 00:54:06 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.075 00:54:06 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.075 00:54:06 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.075 00:54:06 -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.075 00:54:06 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.075 00:54:06 -- nvmf/common.sh@296 -- # e810=() 00:20:14.075 00:54:06 -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.075 00:54:06 -- nvmf/common.sh@297 -- # x722=() 00:20:14.075 00:54:06 -- nvmf/common.sh@297 -- # local -ga x722 00:20:14.075 00:54:06 -- nvmf/common.sh@298 -- # mlx=() 00:20:14.075 00:54:06 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.075 00:54:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.075 00:54:06 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.075 00:54:06 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.076 00:54:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.076 00:54:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:14.076 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:14.076 00:54:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@351 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.076 00:54:06 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:14.076 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:14.076 00:54:06 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.076 00:54:06 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.076 00:54:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.076 00:54:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:14.076 00:54:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.076 00:54:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:14.076 Found net devices under 0000:27:00.0: cvl_0_0 00:20:14.076 00:54:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.076 00:54:06 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.076 00:54:06 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.076 00:54:06 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:14.076 00:54:06 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.076 00:54:06 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:14.076 Found net devices under 0000:27:00.1: cvl_0_1 00:20:14.076 00:54:06 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.076 00:54:06 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:14.076 00:54:06 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:14.076 00:54:06 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:14.076 00:54:06 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:14.076 00:54:06 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.076 00:54:06 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.076 00:54:06 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.076 00:54:06 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.076 00:54:06 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.076 00:54:06 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.076 00:54:06 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.076 00:54:06 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.076 00:54:06 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.076 00:54:06 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.076 00:54:06 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.076 00:54:06 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.076 00:54:06 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.336 00:54:06 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:14.336 00:54:06 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.336 00:54:06 -- nvmf/common.sh@258 
-- # ip link set cvl_0_1 up 00:20:14.336 00:54:06 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.336 00:54:06 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.336 00:54:06 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.336 00:54:06 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:20:14.336 00:20:14.336 --- 10.0.0.2 ping statistics --- 00:20:14.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.336 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:20:14.336 00:54:06 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:14.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:20:14.336 00:20:14.336 --- 10.0.0.1 ping statistics --- 00:20:14.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.336 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:20:14.336 00:54:06 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.336 00:54:06 -- nvmf/common.sh@411 -- # return 0 00:20:14.336 00:54:06 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:14.336 00:54:06 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.336 00:54:06 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:14.336 00:54:06 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:14.336 00:54:06 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.336 00:54:06 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:14.336 00:54:06 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:14.336 00:54:06 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:14.336 00:54:06 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:14.336 00:54:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:14.336 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:20:14.336 00:54:06 -- nvmf/common.sh@470 -- # nvmfpid=2803462 00:20:14.336 00:54:06 -- nvmf/common.sh@471 -- # waitforlisten 2803462 00:20:14.336 00:54:06 -- common/autotest_common.sh@817 -- # '[' -z 2803462 ']' 00:20:14.336 00:54:06 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:14.336 00:54:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.336 00:54:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:14.336 00:54:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.336 00:54:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:14.336 00:54:06 -- common/autotest_common.sh@10 -- # set +x 00:20:14.336 [2024-04-27 00:54:07.004950] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
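The target starting here runs inside the network namespace that nvmf_tcp_init just built above. Condensed, the topology is the following (interface names as in this run; cvl_0_0 and cvl_0_1 are the two functions of one NIC, 0000:27:00.0/.1, presumably cabled so they can reach each other; run as root):

ip netns add cvl_0_0_ns_spdk                 # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk    # move one port into it
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # host -> namespaced target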
00:20:14.336 [2024-04-27 00:54:07.005064] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.596 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.596 [2024-04-27 00:54:07.153849] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.857 [2024-04-27 00:54:07.297280] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.857 [2024-04-27 00:54:07.297347] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.857 [2024-04-27 00:54:07.297365] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.857 [2024-04-27 00:54:07.297379] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:14.857 [2024-04-27 00:54:07.297392] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.857 [2024-04-27 00:54:07.297454] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:15.117 00:54:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:15.117 00:54:07 -- common/autotest_common.sh@850 -- # return 0 00:20:15.117 00:54:07 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:15.117 00:54:07 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:15.117 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:20:15.117 00:54:07 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:15.117 00:54:07 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:15.117 00:54:07 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:15.117 00:54:07 -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:15.118 00:54:07 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:15.118 00:54:07 -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:15.118 00:54:07 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:15.118 00:54:07 -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:15.118 00:54:07 -- fips/fips.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:20:15.377 [2024-04-27 00:54:07.869136] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:15.377 [2024-04-27 00:54:07.885076] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:15.377 [2024-04-27 00:54:07.885491] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.377 [2024-04-27 00:54:07.948243] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:15.377 malloc0 00:20:15.377 00:54:07 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:15.377 00:54:07 -- fips/fips.sh@147 -- # bdevperf_pid=2803538 00:20:15.377 00:54:07 -- fips/fips.sh@148 -- # waitforlisten 2803538 /var/tmp/bdevperf.sock 00:20:15.377 00:54:07 -- common/autotest_common.sh@817 -- # '[' -z 2803538 ']' 00:20:15.377 00:54:07 -- common/autotest_common.sh@821 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:20:15.377 00:54:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:15.378 00:54:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:15.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:15.378 00:54:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:15.378 00:54:07 -- common/autotest_common.sh@10 -- # set +x 00:20:15.378 00:54:07 -- fips/fips.sh@145 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:15.636 [2024-04-27 00:54:08.094230] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:20:15.636 [2024-04-27 00:54:08.094363] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2803538 ] 00:20:15.636 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.636 [2024-04-27 00:54:08.218360] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:15.636 [2024-04-27 00:54:08.311240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:16.203 00:54:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:16.203 00:54:08 -- common/autotest_common.sh@850 -- # return 0 00:20:16.203 00:54:08 -- fips/fips.sh@150 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:16.203 [2024-04-27 00:54:08.860326] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:16.203 [2024-04-27 00:54:08.860445] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:16.463 TLSTESTn1 00:20:16.463 00:54:08 -- fips/fips.sh@154 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:16.463 Running I/O for 10 seconds... 
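While the 10-second run executes, it is worth restating what was wired up: the PSK written to key.txt was handed to the target (the deprecated nvmf_tcp_psk_path warning above), and bdevperf attached with --psk, so TLSTESTn1 is an NVMe/TCP namespace whose traffic is TLS-protected. The client-side steps, replayable by hand (paths abbreviated relative to the spdk checkout; the key is the sample PSK this test uses):

echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt    # keep the PSK private, as the test does
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk key.txt
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests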
00:20:26.498 00:20:26.498 Latency(us) 00:20:26.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.498 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:26.498 Verification LBA range: start 0x0 length 0x2000 00:20:26.498 TLSTESTn1 : 10.03 5483.15 21.42 0.00 0.00 23296.99 5760.27 53256.62 00:20:26.498 =================================================================================================================== 00:20:26.498 Total : 5483.15 21.42 0.00 0.00 23296.99 5760.27 53256.62 00:20:26.498 0 00:20:26.498 00:54:19 -- fips/fips.sh@1 -- # cleanup 00:20:26.498 00:54:19 -- fips/fips.sh@15 -- # process_shm --id 0 00:20:26.498 00:54:19 -- common/autotest_common.sh@794 -- # type=--id 00:20:26.498 00:54:19 -- common/autotest_common.sh@795 -- # id=0 00:20:26.498 00:54:19 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:20:26.498 00:54:19 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:26.498 00:54:19 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:20:26.498 00:54:19 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:20:26.498 00:54:19 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:20:26.498 00:54:19 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:26.498 nvmf_trace.0 00:20:26.498 00:54:19 -- common/autotest_common.sh@809 -- # return 0 00:20:26.498 00:54:19 -- fips/fips.sh@16 -- # killprocess 2803538 00:20:26.498 00:54:19 -- common/autotest_common.sh@936 -- # '[' -z 2803538 ']' 00:20:26.498 00:54:19 -- common/autotest_common.sh@940 -- # kill -0 2803538 00:20:26.498 00:54:19 -- common/autotest_common.sh@941 -- # uname 00:20:26.498 00:54:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:26.498 00:54:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2803538 00:20:26.498 00:54:19 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:20:26.498 00:54:19 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:20:26.498 00:54:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2803538' 00:20:26.498 killing process with pid 2803538 00:20:26.498 00:54:19 -- common/autotest_common.sh@955 -- # kill 2803538 00:20:26.498 Received shutdown signal, test time was about 10.000000 seconds 00:20:26.498 00:20:26.498 Latency(us) 00:20:26.498 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.498 =================================================================================================================== 00:20:26.498 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:26.498 [2024-04-27 00:54:19.168769] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:26.498 00:54:19 -- common/autotest_common.sh@960 -- # wait 2803538 00:20:27.065 00:54:19 -- fips/fips.sh@17 -- # nvmftestfini 00:20:27.065 00:54:19 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:27.065 00:54:19 -- nvmf/common.sh@117 -- # sync 00:20:27.065 00:54:19 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.065 00:54:19 -- nvmf/common.sh@120 -- # set +e 00:20:27.065 00:54:19 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.065 00:54:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.065 rmmod nvme_tcp 00:20:27.065 rmmod nvme_fabrics 00:20:27.065 rmmod nvme_keyring 00:20:27.065 
00:54:19 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.065 00:54:19 -- nvmf/common.sh@124 -- # set -e 00:20:27.065 00:54:19 -- nvmf/common.sh@125 -- # return 0 00:20:27.065 00:54:19 -- nvmf/common.sh@478 -- # '[' -n 2803462 ']' 00:20:27.065 00:54:19 -- nvmf/common.sh@479 -- # killprocess 2803462 00:20:27.065 00:54:19 -- common/autotest_common.sh@936 -- # '[' -z 2803462 ']' 00:20:27.065 00:54:19 -- common/autotest_common.sh@940 -- # kill -0 2803462 00:20:27.065 00:54:19 -- common/autotest_common.sh@941 -- # uname 00:20:27.065 00:54:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:27.065 00:54:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2803462 00:20:27.065 00:54:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:27.065 00:54:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:27.065 00:54:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2803462' 00:20:27.065 killing process with pid 2803462 00:20:27.065 00:54:19 -- common/autotest_common.sh@955 -- # kill 2803462 00:20:27.065 [2024-04-27 00:54:19.671775] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:27.065 00:54:19 -- common/autotest_common.sh@960 -- # wait 2803462 00:20:27.630 00:54:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:27.630 00:54:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:27.630 00:54:20 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:27.630 00:54:20 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.630 00:54:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.630 00:54:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.630 00:54:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.630 00:54:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.166 00:54:22 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.166 00:54:22 -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:30.166 00:20:30.166 real 0m21.204s 00:20:30.166 user 0m24.296s 00:20:30.166 sys 0m7.475s 00:20:30.166 00:54:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:30.166 00:54:22 -- common/autotest_common.sh@10 -- # set +x 00:20:30.166 ************************************ 00:20:30.166 END TEST nvmf_fips 00:20:30.166 ************************************ 00:20:30.166 00:54:22 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:20:30.166 00:54:22 -- nvmf/nvmf.sh@70 -- # [[ phy-fallback == phy ]] 00:20:30.166 00:54:22 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:20:30.166 00:54:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:30.166 00:54:22 -- common/autotest_common.sh@10 -- # set +x 00:20:30.166 00:54:22 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:20:30.167 00:54:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:30.167 00:54:22 -- common/autotest_common.sh@10 -- # set +x 00:20:30.167 00:54:22 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:20:30.167 00:54:22 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:30.167 00:54:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:30.167 00:54:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:30.167 00:54:22 -- common/autotest_common.sh@10 -- # set +x 00:20:30.167 
************************************ 00:20:30.167 START TEST nvmf_multicontroller 00:20:30.167 ************************************ 00:20:30.167 00:54:22 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:20:30.167 * Looking for test storage... 00:20:30.167 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:20:30.167 00:54:22 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.167 00:54:22 -- nvmf/common.sh@7 -- # uname -s 00:20:30.167 00:54:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.167 00:54:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.167 00:54:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.167 00:54:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.167 00:54:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.167 00:54:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.167 00:54:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.167 00:54:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.167 00:54:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.167 00:54:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.167 00:54:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:20:30.167 00:54:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:20:30.167 00:54:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.167 00:54:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.167 00:54:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:30.167 00:54:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.167 00:54:22 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:30.167 00:54:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.167 00:54:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.167 00:54:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.167 00:54:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.167 00:54:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.167 00:54:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.167 00:54:22 -- paths/export.sh@5 -- # export PATH 00:20:30.167 00:54:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.167 00:54:22 -- nvmf/common.sh@47 -- # : 0 00:20:30.167 00:54:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.167 00:54:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.167 00:54:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.167 00:54:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.167 00:54:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.167 00:54:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.167 00:54:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.167 00:54:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.167 00:54:22 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:30.167 00:54:22 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:30.167 00:54:22 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:20:30.167 00:54:22 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:20:30.167 00:54:22 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:30.167 00:54:22 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:20:30.167 00:54:22 -- host/multicontroller.sh@23 -- # nvmftestinit 00:20:30.167 00:54:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:30.167 00:54:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.167 00:54:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:30.167 00:54:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:30.167 00:54:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:30.167 00:54:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.167 00:54:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.167 00:54:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.167 00:54:22 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:20:30.167 00:54:22 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:30.167 00:54:22 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.167 00:54:22 -- common/autotest_common.sh@10 -- # set +x 00:20:35.525 00:54:27 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:35.525 00:54:27 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:35.525 00:54:27 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:35.525 00:54:27 -- nvmf/common.sh@292 -- # pci_net_devs=() 
00:20:35.525 00:54:27 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:35.525 00:54:27 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:35.525 00:54:27 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:35.525 00:54:27 -- nvmf/common.sh@295 -- # net_devs=() 00:20:35.525 00:54:27 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:35.525 00:54:27 -- nvmf/common.sh@296 -- # e810=() 00:20:35.525 00:54:27 -- nvmf/common.sh@296 -- # local -ga e810 00:20:35.525 00:54:27 -- nvmf/common.sh@297 -- # x722=() 00:20:35.525 00:54:27 -- nvmf/common.sh@297 -- # local -ga x722 00:20:35.525 00:54:27 -- nvmf/common.sh@298 -- # mlx=() 00:20:35.525 00:54:27 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:35.525 00:54:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:35.525 00:54:27 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:35.525 00:54:27 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:35.525 00:54:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.525 00:54:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:35.525 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:35.525 00:54:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:35.525 00:54:27 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:35.525 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:35.525 00:54:27 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:35.525 00:54:27 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.525 00:54:27 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.525 00:54:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:35.525 00:54:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.525 00:54:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:35.525 Found net devices under 0000:27:00.0: cvl_0_0 00:20:35.525 00:54:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.525 00:54:27 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:35.525 00:54:27 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:35.525 00:54:27 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:35.525 00:54:27 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:35.525 00:54:27 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:35.525 Found net devices under 0000:27:00.1: cvl_0_1 00:20:35.525 00:54:27 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:35.525 00:54:27 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:35.525 00:54:27 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:35.525 00:54:27 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:35.525 00:54:27 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:35.525 00:54:27 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:35.525 00:54:27 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:35.525 00:54:27 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:35.525 00:54:27 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:35.525 00:54:27 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:35.525 00:54:27 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:35.525 00:54:27 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:35.525 00:54:27 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:35.525 00:54:27 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:35.525 00:54:27 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:35.525 00:54:27 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:35.525 00:54:27 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:35.525 00:54:27 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:35.525 00:54:27 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:35.525 00:54:27 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:35.525 00:54:27 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:35.525 00:54:27 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:35.525 00:54:27 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:35.525 00:54:27 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:35.525 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:35.525 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:20:35.525 00:20:35.525 --- 10.0.0.2 ping statistics --- 00:20:35.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.525 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:20:35.525 00:54:27 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:35.525 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:35.525 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:20:35.525 00:20:35.525 --- 10.0.0.1 ping statistics --- 00:20:35.525 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:35.525 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:20:35.525 00:54:27 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:35.525 00:54:27 -- nvmf/common.sh@411 -- # return 0 00:20:35.525 00:54:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:35.525 00:54:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:35.525 00:54:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:35.525 00:54:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:35.525 00:54:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:35.525 00:54:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:35.525 00:54:27 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:35.525 00:54:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:35.525 00:54:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:35.525 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:20:35.525 00:54:27 -- nvmf/common.sh@470 -- # nvmfpid=2809833 00:20:35.525 00:54:27 -- nvmf/common.sh@471 -- # waitforlisten 2809833 00:20:35.525 00:54:27 -- common/autotest_common.sh@817 -- # '[' -z 2809833 ']' 00:20:35.525 00:54:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:35.525 00:54:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:35.525 00:54:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:35.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:35.525 00:54:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:35.525 00:54:27 -- common/autotest_common.sh@10 -- # set +x 00:20:35.525 00:54:27 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:35.525 [2024-04-27 00:54:27.808868] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:20:35.525 [2024-04-27 00:54:27.808973] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:35.525 EAL: No free 2048 kB hugepages reported on node 1 00:20:35.525 [2024-04-27 00:54:27.955753] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:35.525 [2024-04-27 00:54:28.135754] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:35.526 [2024-04-27 00:54:28.135831] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:35.526 [2024-04-27 00:54:28.135849] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:35.526 [2024-04-27 00:54:28.135867] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:35.526 [2024-04-27 00:54:28.135881] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
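The -m arguments throughout this log (0x2, 0x4, now 0xE) are CPU core bitmaps: bit i selects core i. A one-liner to decode any of them (plain shell arithmetic, not an SPDK tool):

mask=0xE; for i in {0..31}; do (( (mask >> i) & 1 )) && echo "core $i"; done
# 0xE selects cores 1-3, matching the three reactor threads started below;
# 0x2 (the fips target) was core 1 and 0x4 (its bdevperf) was core 2.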
00:20:35.526 [2024-04-27 00:54:28.136119] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:35.526 [2024-04-27 00:54:28.136273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:35.526 [2024-04-27 00:54:28.136277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:36.096 00:54:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:36.096 00:54:28 -- common/autotest_common.sh@850 -- # return 0 00:20:36.096 00:54:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:36.096 00:54:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 00:54:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:36.096 00:54:28 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 [2024-04-27 00:54:28.571815] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 Malloc0 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 [2024-04-27 00:54:28.676614] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 [2024-04-27 00:54:28.684516] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 Malloc1 00:20:36.096 00:54:28 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:36.096 00:54:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:36.096 00:54:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:36.096 00:54:28 -- host/multicontroller.sh@44 -- # bdevperf_pid=2810144 00:20:36.096 00:54:28 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:36.096 00:54:28 -- host/multicontroller.sh@47 -- # waitforlisten 2810144 /var/tmp/bdevperf.sock 00:20:36.096 00:54:28 -- common/autotest_common.sh@817 -- # '[' -z 2810144 ']' 00:20:36.096 00:54:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:36.096 00:54:28 -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:36.096 00:54:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:36.096 00:54:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:36.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
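The waitforlisten step echoed above is the usual poll-until-ready handshake: retry an RPC against the Unix socket until the freshly forked bdevperf answers. A generic stand-in for that loop (an assumption, not the real helper from autotest_common.sh; rpc_get_methods is used here as a cheap probe):

sock=/var/tmp/bdevperf.sock
for i in {1..100}; do
    scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && break
    sleep 0.1
done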
00:20:36.096 00:54:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:36.096 00:54:28 -- common/autotest_common.sh@10 -- # set +x 00:20:37.032 00:54:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:37.032 00:54:29 -- common/autotest_common.sh@850 -- # return 0 00:20:37.032 00:54:29 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:37.032 00:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.032 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:20:37.292 NVMe0n1 00:20:37.292 00:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.292 00:54:29 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:37.292 00:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.292 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:20:37.292 00:54:29 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:37.292 00:54:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.292 1 00:20:37.292 00:54:29 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:37.292 00:54:29 -- common/autotest_common.sh@638 -- # local es=0 00:20:37.292 00:54:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:37.292 00:54:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:37.292 00:54:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:37.292 00:54:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:37.292 00:54:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:37.292 00:54:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:37.292 00:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.292 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:20:37.292 request: 00:20:37.292 { 00:20:37.292 "name": "NVMe0", 00:20:37.292 "trtype": "tcp", 00:20:37.292 "traddr": "10.0.0.2", 00:20:37.292 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:37.292 "hostaddr": "10.0.0.2", 00:20:37.292 "hostsvcid": "60000", 00:20:37.292 "adrfam": "ipv4", 00:20:37.292 "trsvcid": "4420", 00:20:37.292 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.292 "method": "bdev_nvme_attach_controller", 00:20:37.292 "req_id": 1 00:20:37.292 } 00:20:37.292 Got JSON-RPC error response 00:20:37.292 response: 00:20:37.292 { 00:20:37.292 "code": -114, 00:20:37.292 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:37.292 } 00:20:37.292 00:54:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:37.292 00:54:29 -- common/autotest_common.sh@641 -- # es=1 00:20:37.293 00:54:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:37.293 00:54:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:37.293 00:54:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:37.293 00:54:29 -- host/multicontroller.sh@65 -- 
# NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:37.293 00:54:29 -- common/autotest_common.sh@638 -- # local es=0 00:20:37.293 00:54:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:37.293 00:54:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:37.293 00:54:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:37.293 00:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.293 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:20:37.293 request: 00:20:37.293 { 00:20:37.293 "name": "NVMe0", 00:20:37.293 "trtype": "tcp", 00:20:37.293 "traddr": "10.0.0.2", 00:20:37.293 "hostaddr": "10.0.0.2", 00:20:37.293 "hostsvcid": "60000", 00:20:37.293 "adrfam": "ipv4", 00:20:37.293 "trsvcid": "4420", 00:20:37.293 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:37.293 "method": "bdev_nvme_attach_controller", 00:20:37.293 "req_id": 1 00:20:37.293 } 00:20:37.293 Got JSON-RPC error response 00:20:37.293 response: 00:20:37.293 { 00:20:37.293 "code": -114, 00:20:37.293 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:37.293 } 00:20:37.293 00:54:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:37.293 00:54:29 -- common/autotest_common.sh@641 -- # es=1 00:20:37.293 00:54:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:37.293 00:54:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:37.293 00:54:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:37.293 00:54:29 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:37.293 00:54:29 -- common/autotest_common.sh@638 -- # local es=0 00:20:37.293 00:54:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:37.293 00:54:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:37.293 00:54:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:37.293 00:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.293 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:20:37.293 request: 00:20:37.293 { 00:20:37.293 "name": "NVMe0", 00:20:37.293 "trtype": "tcp", 00:20:37.293 "traddr": "10.0.0.2", 00:20:37.293 "hostaddr": 
"10.0.0.2", 00:20:37.293 "hostsvcid": "60000", 00:20:37.293 "adrfam": "ipv4", 00:20:37.293 "trsvcid": "4420", 00:20:37.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.293 "multipath": "disable", 00:20:37.293 "method": "bdev_nvme_attach_controller", 00:20:37.293 "req_id": 1 00:20:37.293 } 00:20:37.293 Got JSON-RPC error response 00:20:37.293 response: 00:20:37.293 { 00:20:37.293 "code": -114, 00:20:37.293 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:37.293 } 00:20:37.293 00:54:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:37.293 00:54:29 -- common/autotest_common.sh@641 -- # es=1 00:20:37.293 00:54:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:37.293 00:54:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:37.293 00:54:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:37.293 00:54:29 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:37.293 00:54:29 -- common/autotest_common.sh@638 -- # local es=0 00:20:37.293 00:54:29 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:37.293 00:54:29 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:20:37.293 00:54:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:37.293 00:54:29 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:37.293 00:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.293 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:20:37.293 request: 00:20:37.293 { 00:20:37.293 "name": "NVMe0", 00:20:37.293 "trtype": "tcp", 00:20:37.293 "traddr": "10.0.0.2", 00:20:37.293 "hostaddr": "10.0.0.2", 00:20:37.293 "hostsvcid": "60000", 00:20:37.293 "adrfam": "ipv4", 00:20:37.293 "trsvcid": "4420", 00:20:37.293 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.293 "multipath": "failover", 00:20:37.293 "method": "bdev_nvme_attach_controller", 00:20:37.293 "req_id": 1 00:20:37.293 } 00:20:37.293 Got JSON-RPC error response 00:20:37.293 response: 00:20:37.293 { 00:20:37.293 "code": -114, 00:20:37.293 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:37.293 } 00:20:37.293 00:54:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:20:37.293 00:54:29 -- common/autotest_common.sh@641 -- # es=1 00:20:37.293 00:54:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:37.293 00:54:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:37.293 00:54:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:37.293 00:54:29 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:37.293 00:54:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.293 00:54:29 -- common/autotest_common.sh@10 -- # set +x 00:20:37.553 00:20:37.553 00:54:30 -- common/autotest_common.sh@577 -- # 
[[ 0 == 0 ]] 00:20:37.553 00:54:30 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:37.553 00:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.553 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:20:37.553 00:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.553 00:54:30 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:37.553 00:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.553 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:20:37.553 00:20:37.553 00:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.553 00:54:30 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:37.553 00:54:30 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:37.553 00:54:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:37.553 00:54:30 -- common/autotest_common.sh@10 -- # set +x 00:20:37.553 00:54:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:37.553 00:54:30 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:37.553 00:54:30 -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:38.930 0 00:20:38.930 00:54:31 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:38.930 00:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:38.930 00:54:31 -- common/autotest_common.sh@10 -- # set +x 00:20:38.930 00:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:38.930 00:54:31 -- host/multicontroller.sh@100 -- # killprocess 2810144 00:20:38.930 00:54:31 -- common/autotest_common.sh@936 -- # '[' -z 2810144 ']' 00:20:38.930 00:54:31 -- common/autotest_common.sh@940 -- # kill -0 2810144 00:20:38.930 00:54:31 -- common/autotest_common.sh@941 -- # uname 00:20:38.930 00:54:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:38.930 00:54:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2810144 00:20:38.930 00:54:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:38.930 00:54:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:38.930 00:54:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2810144' 00:20:38.930 killing process with pid 2810144 00:20:38.930 00:54:31 -- common/autotest_common.sh@955 -- # kill 2810144 00:20:38.930 00:54:31 -- common/autotest_common.sh@960 -- # wait 2810144 00:20:39.190 00:54:31 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:39.190 00:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.190 00:54:31 -- common/autotest_common.sh@10 -- # set +x 00:20:39.190 00:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.190 00:54:31 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:39.190 00:54:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:39.190 00:54:31 -- common/autotest_common.sh@10 -- # set +x 00:20:39.190 00:54:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:39.190 00:54:31 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:39.190 
00:54:31 -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:39.190 00:54:31 -- common/autotest_common.sh@1598 -- # read -r file 00:20:39.190 00:54:31 -- common/autotest_common.sh@1597 -- # find /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:39.190 00:54:31 -- common/autotest_common.sh@1597 -- # sort -u 00:20:39.190 00:54:31 -- common/autotest_common.sh@1599 -- # cat 00:20:39.190 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:39.190 [2024-04-27 00:54:28.851707] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:20:39.190 [2024-04-27 00:54:28.851843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810144 ] 00:20:39.190 EAL: No free 2048 kB hugepages reported on node 1 00:20:39.190 [2024-04-27 00:54:28.970469] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:39.190 [2024-04-27 00:54:29.060526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:39.190 [2024-04-27 00:54:30.180654] bdev.c:4551:bdev_name_add: *ERROR*: Bdev name 2a30de5e-245a-4903-9a77-41e91f9f851b already exists 00:20:39.190 [2024-04-27 00:54:30.180704] bdev.c:7668:bdev_register: *ERROR*: Unable to add uuid:2a30de5e-245a-4903-9a77-41e91f9f851b alias for bdev NVMe1n1 00:20:39.190 [2024-04-27 00:54:30.180722] bdev_nvme.c:4276:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:39.190 Running I/O for 1 seconds... 00:20:39.190 00:20:39.190 Latency(us) 00:20:39.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.190 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:39.190 NVMe0n1 : 1.00 25300.61 98.83 0.00 0.00 5053.25 3190.57 10761.70 00:20:39.190 =================================================================================================================== 00:20:39.190 Total : 25300.61 98.83 0.00 0.00 5053.25 3190.57 10761.70 00:20:39.190 Received shutdown signal, test time was about 1.000000 seconds 00:20:39.190 00:20:39.190 Latency(us) 00:20:39.190 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.190 =================================================================================================================== 00:20:39.190 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:39.190 --- /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:39.190 00:54:31 -- common/autotest_common.sh@1604 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:39.190 00:54:31 -- common/autotest_common.sh@1598 -- # read -r file 00:20:39.190 00:54:31 -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:39.190 00:54:31 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:39.190 00:54:31 -- nvmf/common.sh@117 -- # sync 00:20:39.190 00:54:31 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:39.190 00:54:31 -- nvmf/common.sh@120 -- # set +e 00:20:39.190 00:54:31 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:39.190 00:54:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:39.190 rmmod nvme_tcp 00:20:39.190 rmmod nvme_fabrics 00:20:39.190 rmmod nvme_keyring 00:20:39.190 00:54:31 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:39.190 00:54:31 -- nvmf/common.sh@124 -- # set -e 00:20:39.190 00:54:31 -- 
nvmf/common.sh@125 -- # return 0 00:20:39.190 00:54:31 -- nvmf/common.sh@478 -- # '[' -n 2809833 ']' 00:20:39.190 00:54:31 -- nvmf/common.sh@479 -- # killprocess 2809833 00:20:39.190 00:54:31 -- common/autotest_common.sh@936 -- # '[' -z 2809833 ']' 00:20:39.190 00:54:31 -- common/autotest_common.sh@940 -- # kill -0 2809833 00:20:39.190 00:54:31 -- common/autotest_common.sh@941 -- # uname 00:20:39.190 00:54:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:39.190 00:54:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2809833 00:20:39.450 00:54:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:39.450 00:54:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:39.450 00:54:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2809833' 00:20:39.450 killing process with pid 2809833 00:20:39.450 00:54:31 -- common/autotest_common.sh@955 -- # kill 2809833 00:20:39.450 00:54:31 -- common/autotest_common.sh@960 -- # wait 2809833 00:20:40.021 00:54:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:40.021 00:54:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:40.021 00:54:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:40.021 00:54:32 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:40.021 00:54:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:40.021 00:54:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:40.021 00:54:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:40.021 00:54:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:41.928 00:54:34 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:41.928 00:20:41.928 real 0m12.156s 00:20:41.928 user 0m17.327s 00:20:41.928 sys 0m4.813s 00:20:41.928 00:54:34 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:41.928 00:54:34 -- common/autotest_common.sh@10 -- # set +x 00:20:41.928 ************************************ 00:20:41.928 END TEST nvmf_multicontroller 00:20:41.928 ************************************ 00:20:41.928 00:54:34 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:41.928 00:54:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:41.928 00:54:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:41.928 00:54:34 -- common/autotest_common.sh@10 -- # set +x 00:20:42.188 ************************************ 00:20:42.188 START TEST nvmf_aer 00:20:42.188 ************************************ 00:20:42.188 00:54:34 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:42.188 * Looking for test storage... 
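Each host-side suite in this run is dispatched the same way: nvmf.sh hands the script path and --transport=tcp to run_test, which prints the START/END banners and the real/user/sys summary seen above. A simplified sketch of that helper (the real one in autotest_common.sh does more bookkeeping; $rootdir here is illustrative):

    run_test() {
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"            # run the test script with its arguments
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
    }
    run_test nvmf_aer "$rootdir/test/nvmf/host/aer.sh" --transport=tcp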
00:20:42.188 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:20:42.188 00:54:34 -- host/aer.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.188 00:54:34 -- nvmf/common.sh@7 -- # uname -s 00:20:42.188 00:54:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.188 00:54:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.188 00:54:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.188 00:54:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.188 00:54:34 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.188 00:54:34 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.188 00:54:34 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.188 00:54:34 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.188 00:54:34 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.188 00:54:34 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.188 00:54:34 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:20:42.188 00:54:34 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:20:42.188 00:54:34 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.188 00:54:34 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.188 00:54:34 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:42.188 00:54:34 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:42.188 00:54:34 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:42.188 00:54:34 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.188 00:54:34 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.189 00:54:34 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.189 00:54:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.189 00:54:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.189 00:54:34 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.189 00:54:34 -- paths/export.sh@5 -- # export PATH 00:20:42.189 00:54:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.189 00:54:34 -- nvmf/common.sh@47 -- # : 0 00:20:42.189 00:54:34 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:42.189 00:54:34 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:42.189 00:54:34 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:42.189 00:54:34 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.189 00:54:34 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.189 00:54:34 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:42.189 00:54:34 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:42.189 00:54:34 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:42.189 00:54:34 -- host/aer.sh@11 -- # nvmftestinit 00:20:42.189 00:54:34 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:42.189 00:54:34 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.189 00:54:34 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:42.189 00:54:34 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:42.189 00:54:34 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:42.189 00:54:34 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.189 00:54:34 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.189 00:54:34 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.189 00:54:34 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:20:42.189 00:54:34 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:42.189 00:54:34 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.189 00:54:34 -- common/autotest_common.sh@10 -- # set +x 00:20:48.762 00:54:41 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:48.762 00:54:41 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:48.762 00:54:41 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:48.762 00:54:41 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:48.762 00:54:41 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:48.762 00:54:41 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:48.762 00:54:41 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:48.762 00:54:41 -- nvmf/common.sh@295 -- # net_devs=() 00:20:48.762 00:54:41 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:48.762 00:54:41 -- nvmf/common.sh@296 -- # e810=() 00:20:48.762 00:54:41 -- nvmf/common.sh@296 -- # local -ga e810 00:20:48.762 00:54:41 -- nvmf/common.sh@297 -- # x722=() 
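The interface-discovery pass that follows classifies candidate NICs purely by PCI vendor:device ID before settling on a TCP test pair. Condensed from the xtrace (IDs exactly as enumerated there; the pci_bus_cache lookups are elided):

    intel=0x8086 mellanox=0x15b3
    e810=(0x1592 0x159b)        # Intel E810 family
    x722=(0x37d2)               # Intel X722
    mlx=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)   # Mellanox parts
    # On this node the match is two ports of an E810 (0x8086:0x159b, "ice" driver):
    #   0000:27:00.0 -> cvl_0_0 (target side)    0000:27:00.1 -> cvl_0_1 (initiator)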
00:20:48.762 00:54:41 -- nvmf/common.sh@297 -- # local -ga x722 00:20:48.762 00:54:41 -- nvmf/common.sh@298 -- # mlx=() 00:20:48.762 00:54:41 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:48.762 00:54:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:48.762 00:54:41 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:48.762 00:54:41 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:48.762 00:54:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.762 00:54:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:48.762 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:48.762 00:54:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:48.762 00:54:41 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:48.762 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:48.762 00:54:41 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:48.762 00:54:41 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.762 00:54:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.762 00:54:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:48.762 00:54:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.762 00:54:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:48.762 Found net devices under 0000:27:00.0: cvl_0_0 00:20:48.762 00:54:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.762 00:54:41 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:48.762 
00:54:41 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:48.762 00:54:41 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:48.762 00:54:41 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:48.762 00:54:41 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:48.762 Found net devices under 0000:27:00.1: cvl_0_1 00:20:48.762 00:54:41 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:48.762 00:54:41 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:48.762 00:54:41 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:48.762 00:54:41 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:48.762 00:54:41 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:48.762 00:54:41 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.762 00:54:41 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.762 00:54:41 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:48.762 00:54:41 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:48.762 00:54:41 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:48.762 00:54:41 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:48.762 00:54:41 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:48.762 00:54:41 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:48.762 00:54:41 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:48.762 00:54:41 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:48.762 00:54:41 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:48.762 00:54:41 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:48.762 00:54:41 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:48.763 00:54:41 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:48.763 00:54:41 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:48.763 00:54:41 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:48.763 00:54:41 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:48.763 00:54:41 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:48.763 00:54:41 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:48.763 00:54:41 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:48.763 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:48.763 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:20:48.763 00:20:48.763 --- 10.0.0.2 ping statistics --- 00:20:48.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.763 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:20:48.763 00:54:41 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:48.763 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:48.763 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.071 ms 00:20:48.763 00:20:48.763 --- 10.0.0.1 ping statistics --- 00:20:48.763 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:48.763 rtt min/avg/max/mdev = 0.071/0.071/0.071/0.000 ms 00:20:48.763 00:54:41 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:48.763 00:54:41 -- nvmf/common.sh@411 -- # return 0 00:20:48.763 00:54:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:48.763 00:54:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.763 00:54:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:48.763 00:54:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:48.763 00:54:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.763 00:54:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:48.763 00:54:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:48.763 00:54:41 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:48.763 00:54:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:48.763 00:54:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:48.763 00:54:41 -- common/autotest_common.sh@10 -- # set +x 00:20:48.763 00:54:41 -- nvmf/common.sh@470 -- # nvmfpid=2814953 00:20:48.763 00:54:41 -- nvmf/common.sh@471 -- # waitforlisten 2814953 00:20:48.763 00:54:41 -- common/autotest_common.sh@817 -- # '[' -z 2814953 ']' 00:20:48.763 00:54:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.763 00:54:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:48.763 00:54:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.763 00:54:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:48.763 00:54:41 -- common/autotest_common.sh@10 -- # set +x 00:20:48.763 00:54:41 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:48.763 [2024-04-27 00:54:41.446234] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:20:48.763 [2024-04-27 00:54:41.446356] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:49.024 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.024 [2024-04-27 00:54:41.581119] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:49.024 [2024-04-27 00:54:41.676606] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:49.024 [2024-04-27 00:54:41.676659] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:49.024 [2024-04-27 00:54:41.676671] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:49.024 [2024-04-27 00:54:41.676680] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:49.024 [2024-04-27 00:54:41.676688] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
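nvmftestinit's TCP path, traced just above, builds the test bed by moving one port of the NIC into a private network namespace: the target listens on 10.0.0.2 inside the namespace while the initiator keeps 10.0.0.1 in the root namespace. Condensed from the log (interface names as detected on this node):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # path sanity check
    # the target then runs inside the namespace, per NVMF_TARGET_NS_CMD:
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF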
00:20:49.024 [2024-04-27 00:54:41.676797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:49.024 [2024-04-27 00:54:41.676893] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.024 [2024-04-27 00:54:41.676992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:49.024 [2024-04-27 00:54:41.677004] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:49.595 00:54:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:49.595 00:54:42 -- common/autotest_common.sh@850 -- # return 0 00:20:49.595 00:54:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:49.595 00:54:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:49.595 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 00:54:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.595 00:54:42 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:49.595 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.595 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 [2024-04-27 00:54:42.209760] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:49.595 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.595 00:54:42 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:49.595 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.595 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 Malloc0 00:20:49.595 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.595 00:54:42 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:49.595 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.595 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.595 00:54:42 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:49.595 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.595 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.595 00:54:42 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:49.595 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.595 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:49.595 [2024-04-27 00:54:42.275756] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:49.595 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.595 00:54:42 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:49.595 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:49.595 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:49.596 [2024-04-27 00:54:42.283405] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:20:49.596 [ 00:20:49.596 { 00:20:49.596 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:49.596 "subtype": "Discovery", 00:20:49.596 "listen_addresses": [], 00:20:49.596 "allow_any_host": true, 00:20:49.596 "hosts": [] 00:20:49.596 }, 00:20:49.596 { 00:20:49.596 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:20:49.596 "subtype": "NVMe", 00:20:49.596 "listen_addresses": [ 00:20:49.596 { 00:20:49.596 "transport": "TCP", 00:20:49.596 "trtype": "TCP", 00:20:49.596 "adrfam": "IPv4", 00:20:49.596 "traddr": "10.0.0.2", 00:20:49.596 "trsvcid": "4420" 00:20:49.596 } 00:20:49.596 ], 00:20:49.596 "allow_any_host": true, 00:20:49.596 "hosts": [], 00:20:49.596 "serial_number": "SPDK00000000000001", 00:20:49.596 "model_number": "SPDK bdev Controller", 00:20:49.596 "max_namespaces": 2, 00:20:49.596 "min_cntlid": 1, 00:20:49.596 "max_cntlid": 65519, 00:20:49.596 "namespaces": [ 00:20:49.596 { 00:20:49.596 "nsid": 1, 00:20:49.596 "bdev_name": "Malloc0", 00:20:49.596 "name": "Malloc0", 00:20:49.596 "nguid": "EBB29F2F59734B8A8AB1AAD56F96DAE2", 00:20:49.596 "uuid": "ebb29f2f-5973-4b8a-8ab1-aad56f96dae2" 00:20:49.596 } 00:20:49.596 ] 00:20:49.596 } 00:20:49.596 ] 00:20:49.596 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:49.596 00:54:42 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:49.596 00:54:42 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:49.856 00:54:42 -- host/aer.sh@33 -- # aerpid=2815145 00:20:49.856 00:54:42 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:49.856 00:54:42 -- common/autotest_common.sh@1251 -- # local i=0 00:20:49.856 00:54:42 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:49.856 00:54:42 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:20:49.856 00:54:42 -- common/autotest_common.sh@1254 -- # i=1 00:20:49.856 00:54:42 -- host/aer.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:49.856 00:54:42 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:20:49.856 00:54:42 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:49.856 00:54:42 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:20:49.856 00:54:42 -- common/autotest_common.sh@1254 -- # i=2 00:20:49.856 00:54:42 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:20:49.856 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.856 00:54:42 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:49.856 00:54:42 -- common/autotest_common.sh@1253 -- # '[' 2 -lt 200 ']' 00:20:49.856 00:54:42 -- common/autotest_common.sh@1254 -- # i=3 00:20:49.856 00:54:42 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:20:50.117 00:54:42 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:50.117 00:54:42 -- common/autotest_common.sh@1253 -- # '[' 3 -lt 200 ']' 00:20:50.117 00:54:42 -- common/autotest_common.sh@1254 -- # i=4 00:20:50.117 00:54:42 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:20:50.117 00:54:42 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:50.117 00:54:42 -- common/autotest_common.sh@1258 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:50.117 00:54:42 -- common/autotest_common.sh@1262 -- # return 0 00:20:50.117 00:54:42 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:50.117 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.117 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.117 Malloc1 00:20:50.117 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.117 00:54:42 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:50.117 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.117 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.117 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.117 00:54:42 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:50.117 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.117 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.117 [ 00:20:50.117 { 00:20:50.117 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:50.117 "subtype": "Discovery", 00:20:50.117 "listen_addresses": [], 00:20:50.117 "allow_any_host": true, 00:20:50.117 "hosts": [] 00:20:50.117 }, 00:20:50.117 { 00:20:50.117 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.117 "subtype": "NVMe", 00:20:50.117 "listen_addresses": [ 00:20:50.117 { 00:20:50.117 "transport": "TCP", 00:20:50.117 "trtype": "TCP", 00:20:50.117 "adrfam": "IPv4", 00:20:50.117 "traddr": "10.0.0.2", 00:20:50.117 "trsvcid": "4420" 00:20:50.117 } 00:20:50.117 ], 00:20:50.117 "allow_any_host": true, 00:20:50.117 "hosts": [], 00:20:50.117 "serial_number": "SPDK00000000000001", 00:20:50.117 "model_number": "SPDK bdev Controller", 00:20:50.117 "max_namespaces": 2, 00:20:50.117 "min_cntlid": 1, 00:20:50.117 "max_cntlid": 65519, 00:20:50.117 "namespaces": [ 00:20:50.117 { 00:20:50.117 "nsid": 1, 00:20:50.117 "bdev_name": "Malloc0", 00:20:50.117 "name": "Malloc0", 00:20:50.117 "nguid": "EBB29F2F59734B8A8AB1AAD56F96DAE2", 00:20:50.117 "uuid": "ebb29f2f-5973-4b8a-8ab1-aad56f96dae2" 00:20:50.117 }, 00:20:50.117 { 00:20:50.117 "nsid": 2, 00:20:50.117 "bdev_name": "Malloc1", 00:20:50.117 "name": "Malloc1", 00:20:50.117 "nguid": "B17C394E021C44F79012035304CD1A34", 00:20:50.117 "uuid": "b17c394e-021c-44f7-9012-035304cd1a34" 00:20:50.117 } 00:20:50.117 ] 00:20:50.117 } 00:20:50.117 ] 00:20:50.117 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.117 00:54:42 -- host/aer.sh@43 -- # wait 2815145 00:20:50.377 Asynchronous Event Request test 00:20:50.378 Attaching to 10.0.0.2 00:20:50.378 Attached to 10.0.0.2 00:20:50.378 Registering asynchronous event callbacks... 00:20:50.378 Starting namespace attribute notice tests for all controllers... 00:20:50.378 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:50.378 aer_cb - Changed Namespace 00:20:50.378 Cleaning up... 
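The AER exercise that just finished cleaning up works in three moves: create the subsystem with headroom for a second namespace, start the aer helper waiting for asynchronous events, then hot-add a namespace to trigger the "Changed Namespace" notice asserted above. In outline (flags as in the log; the touch file is how the helper signals that its callbacks are registered):

    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rm -f /tmp/aer_touch_file
    # the helper connects, registers AER callbacks, then touches the file:
    test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 \
        subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    aerpid=$!
    # once the file appears, adding namespace 2 fires the namespace-change AEN:
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
    wait $aerpid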
00:20:50.378 00:54:42 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:50.378 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.378 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.378 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.378 00:54:42 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:50.378 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.378 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.378 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.378 00:54:42 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:50.378 00:54:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:50.378 00:54:42 -- common/autotest_common.sh@10 -- # set +x 00:20:50.378 00:54:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:50.378 00:54:42 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:50.378 00:54:42 -- host/aer.sh@51 -- # nvmftestfini 00:20:50.378 00:54:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:50.378 00:54:42 -- nvmf/common.sh@117 -- # sync 00:20:50.378 00:54:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:50.378 00:54:42 -- nvmf/common.sh@120 -- # set +e 00:20:50.378 00:54:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:50.378 00:54:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:50.378 rmmod nvme_tcp 00:20:50.378 rmmod nvme_fabrics 00:20:50.378 rmmod nvme_keyring 00:20:50.378 00:54:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:50.378 00:54:43 -- nvmf/common.sh@124 -- # set -e 00:20:50.378 00:54:43 -- nvmf/common.sh@125 -- # return 0 00:20:50.378 00:54:43 -- nvmf/common.sh@478 -- # '[' -n 2814953 ']' 00:20:50.378 00:54:43 -- nvmf/common.sh@479 -- # killprocess 2814953 00:20:50.378 00:54:43 -- common/autotest_common.sh@936 -- # '[' -z 2814953 ']' 00:20:50.378 00:54:43 -- common/autotest_common.sh@940 -- # kill -0 2814953 00:20:50.378 00:54:43 -- common/autotest_common.sh@941 -- # uname 00:20:50.378 00:54:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:50.378 00:54:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2814953 00:20:50.638 00:54:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:50.638 00:54:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:50.638 00:54:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2814953' 00:20:50.638 killing process with pid 2814953 00:20:50.638 00:54:43 -- common/autotest_common.sh@955 -- # kill 2814953 00:20:50.638 [2024-04-27 00:54:43.114352] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:20:50.638 00:54:43 -- common/autotest_common.sh@960 -- # wait 2814953 00:20:50.899 00:54:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:50.899 00:54:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:50.899 00:54:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:50.899 00:54:43 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:50.899 00:54:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:50.899 00:54:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:50.899 00:54:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:50.899 00:54:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.436 00:54:45 -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:53.436 00:20:53.436 real 0m10.949s 00:20:53.436 user 0m8.990s 00:20:53.436 sys 0m5.451s 00:20:53.436 00:54:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:53.436 00:54:45 -- common/autotest_common.sh@10 -- # set +x 00:20:53.436 ************************************ 00:20:53.436 END TEST nvmf_aer 00:20:53.436 ************************************ 00:20:53.436 00:54:45 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:53.436 00:54:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:53.436 00:54:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:53.436 00:54:45 -- common/autotest_common.sh@10 -- # set +x 00:20:53.436 ************************************ 00:20:53.436 START TEST nvmf_async_init 00:20:53.436 ************************************ 00:20:53.436 00:54:45 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:53.436 * Looking for test storage... 00:20:53.436 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:20:53.436 00:54:45 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:20:53.436 00:54:45 -- nvmf/common.sh@7 -- # uname -s 00:20:53.436 00:54:45 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:53.436 00:54:45 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:53.436 00:54:45 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:53.436 00:54:45 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:53.436 00:54:45 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:53.436 00:54:45 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:53.436 00:54:45 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:53.436 00:54:45 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:53.436 00:54:45 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:53.436 00:54:45 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:53.436 00:54:45 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:20:53.436 00:54:45 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:20:53.436 00:54:45 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:53.436 00:54:45 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:53.436 00:54:45 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:20:53.436 00:54:45 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:53.436 00:54:45 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:20:53.436 00:54:45 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:53.436 00:54:45 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:53.436 00:54:45 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:53.436 00:54:45 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.437 00:54:45 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.437 00:54:45 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.437 00:54:45 -- paths/export.sh@5 -- # export PATH 00:20:53.437 00:54:45 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:53.437 00:54:45 -- nvmf/common.sh@47 -- # : 0 00:20:53.437 00:54:45 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:53.437 00:54:45 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:53.437 00:54:45 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:53.437 00:54:45 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:53.437 00:54:45 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:53.437 00:54:45 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:53.437 00:54:45 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:53.437 00:54:45 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:53.437 00:54:45 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:53.437 00:54:45 -- host/async_init.sh@14 -- # null_block_size=512 00:20:53.437 00:54:45 -- host/async_init.sh@15 -- # null_bdev=null0 00:20:53.437 00:54:45 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:53.437 00:54:45 -- host/async_init.sh@20 -- # uuidgen 00:20:53.437 00:54:45 -- host/async_init.sh@20 -- # tr -d - 00:20:53.437 00:54:45 -- host/async_init.sh@20 -- # nguid=7db8a2ea831e400bba5fd8ed7d6b5d9d 00:20:53.437 00:54:45 -- host/async_init.sh@22 -- # nvmftestinit 00:20:53.437 00:54:45 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 
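async_init parameterizes itself up front, as the xtrace just above shows: a null bdev (null0) sized via null_bdev_size=1024 with null_block_size=512, a controller name nvme0, and a namespace GUID produced by stripping the dashes from a fresh UUID. One-liner equivalent:

    nguid=$(uuidgen | tr -d -)    # this run: 7db8a2ea831e400bba5fd8ed7d6b5d9d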
00:20:53.437 00:54:45 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:53.437 00:54:45 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:53.437 00:54:45 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:53.437 00:54:45 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:53.437 00:54:45 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.437 00:54:45 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:53.437 00:54:45 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:53.437 00:54:45 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:20:53.437 00:54:45 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:20:53.437 00:54:45 -- nvmf/common.sh@285 -- # xtrace_disable 00:20:53.437 00:54:45 -- common/autotest_common.sh@10 -- # set +x 00:20:58.705 00:54:51 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:58.705 00:54:51 -- nvmf/common.sh@291 -- # pci_devs=() 00:20:58.705 00:54:51 -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:58.705 00:54:51 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:58.705 00:54:51 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:58.705 00:54:51 -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:58.705 00:54:51 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:58.705 00:54:51 -- nvmf/common.sh@295 -- # net_devs=() 00:20:58.705 00:54:51 -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:58.705 00:54:51 -- nvmf/common.sh@296 -- # e810=() 00:20:58.705 00:54:51 -- nvmf/common.sh@296 -- # local -ga e810 00:20:58.705 00:54:51 -- nvmf/common.sh@297 -- # x722=() 00:20:58.705 00:54:51 -- nvmf/common.sh@297 -- # local -ga x722 00:20:58.705 00:54:51 -- nvmf/common.sh@298 -- # mlx=() 00:20:58.705 00:54:51 -- nvmf/common.sh@298 -- # local -ga mlx 00:20:58.705 00:54:51 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:58.705 00:54:51 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:58.705 00:54:51 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:58.705 00:54:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.705 00:54:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:20:58.705 Found 0000:27:00.0 (0x8086 - 0x159b) 00:20:58.705 00:54:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.705 00:54:51 -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:58.705 00:54:51 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:20:58.705 Found 0000:27:00.1 (0x8086 - 0x159b) 00:20:58.705 00:54:51 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:58.705 00:54:51 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.705 00:54:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.705 00:54:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:58.705 00:54:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.705 00:54:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:20:58.705 Found net devices under 0000:27:00.0: cvl_0_0 00:20:58.705 00:54:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.705 00:54:51 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:58.705 00:54:51 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:58.705 00:54:51 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:20:58.705 00:54:51 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:58.705 00:54:51 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:20:58.705 Found net devices under 0000:27:00.1: cvl_0_1 00:20:58.705 00:54:51 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:20:58.705 00:54:51 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:20:58.705 00:54:51 -- nvmf/common.sh@403 -- # is_hw=yes 00:20:58.705 00:54:51 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:20:58.705 00:54:51 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:20:58.705 00:54:51 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:58.705 00:54:51 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:58.705 00:54:51 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:58.705 00:54:51 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:58.705 00:54:51 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:58.705 00:54:51 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:58.705 00:54:51 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:58.706 00:54:51 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:58.706 00:54:51 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:58.706 00:54:51 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:58.706 00:54:51 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:58.706 00:54:51 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:58.706 00:54:51 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:58.706 00:54:51 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:20:58.706 00:54:51 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:58.706 00:54:51 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:58.706 00:54:51 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:58.965 00:54:51 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:58.965 00:54:51 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:58.965 00:54:51 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:58.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:58.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:20:58.965 00:20:58.965 --- 10.0.0.2 ping statistics --- 00:20:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.965 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:20:58.965 00:54:51 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:58.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:58.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.078 ms 00:20:58.965 00:20:58.965 --- 10.0.0.1 ping statistics --- 00:20:58.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:58.965 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:58.965 00:54:51 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:58.965 00:54:51 -- nvmf/common.sh@411 -- # return 0 00:20:58.965 00:54:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:58.965 00:54:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:58.965 00:54:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:58.965 00:54:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:58.965 00:54:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:58.965 00:54:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:58.965 00:54:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:58.965 00:54:51 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:58.965 00:54:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:58.965 00:54:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:58.965 00:54:51 -- common/autotest_common.sh@10 -- # set +x 00:20:58.965 00:54:51 -- nvmf/common.sh@470 -- # nvmfpid=2819365 00:20:58.965 00:54:51 -- nvmf/common.sh@471 -- # waitforlisten 2819365 00:20:58.965 00:54:51 -- common/autotest_common.sh@817 -- # '[' -z 2819365 ']' 00:20:58.965 00:54:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:58.965 00:54:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:58.965 00:54:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:58.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:58.965 00:54:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:58.965 00:54:51 -- common/autotest_common.sh@10 -- # set +x 00:20:58.965 00:54:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:58.965 [2024-04-27 00:54:51.556332] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
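The nvmf_tcp_init sequence above builds a two-port loopback topology on one ice NIC: cvl_0_0 is moved into a fresh network namespace to act as the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1). Condensed from the commands in this log (interface names are the ones enumerated on this machine):

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1          # drop stale addresses
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                            # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1              # target -> initiator

This is why every nvmf_tgt launch and RPC later in the run is wrapped in ip netns exec cvl_0_0_ns_spdk.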
00:20:58.965 [2024-04-27 00:54:51.556451] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:58.965 EAL: No free 2048 kB hugepages reported on node 1 00:20:59.227 [2024-04-27 00:54:51.691080] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:59.227 [2024-04-27 00:54:51.782954] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:59.227 [2024-04-27 00:54:51.783012] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:59.227 [2024-04-27 00:54:51.783022] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:59.227 [2024-04-27 00:54:51.783032] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:59.227 [2024-04-27 00:54:51.783041] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:59.227 [2024-04-27 00:54:51.783081] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:59.794 00:54:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:59.794 00:54:52 -- common/autotest_common.sh@850 -- # return 0 00:20:59.794 00:54:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:59.794 00:54:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:59.794 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 00:54:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:59.794 00:54:52 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:59.794 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.794 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 [2024-04-27 00:54:52.317392] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:59.794 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.794 00:54:52 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:59.794 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.794 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 null0 00:20:59.794 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.794 00:54:52 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:59.794 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.794 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.794 00:54:52 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:59.794 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.794 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.794 00:54:52 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 7db8a2ea831e400bba5fd8ed7d6b5d9d 00:20:59.794 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.794 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.794 00:54:52 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
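The rpc_cmd calls above are thin wrappers that forward to the RPC server on /var/tmp/spdk.sock; an equivalent standalone sequence for this async_init setup, assuming the repo's scripts/rpc.py as the client (the commands and arguments themselves are verbatim from the log, the nguid is derived the same way the test does):

RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
NGUID=$(uuidgen | tr -d -)                          # hex nguid without dashes
$RPC nvmf_create_transport -t tcp -o                # TCP transport, -o as in the log
$RPC bdev_null_create null0 1024 512                # 1024 MiB backing, 512 B blocks
$RPC bdev_wait_for_examine
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a          # -a: allow any host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g $NGUID
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0

The attach below yields nvme0n1 (2097152 x 512 B blocks = the 1024 MiB null bdev), and bdev_get_bdevs reports the nguid back as the namespace uuid/alias.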
00:20:59.794 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.794 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:20:59.794 [2024-04-27 00:54:52.361589] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:59.794 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:59.794 00:54:52 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:59.794 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:59.794 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.052 nvme0n1 00:21:00.052 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.052 00:54:52 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:00.052 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.052 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.052 [ 00:21:00.052 { 00:21:00.052 "name": "nvme0n1", 00:21:00.052 "aliases": [ 00:21:00.052 "7db8a2ea-831e-400b-ba5f-d8ed7d6b5d9d" 00:21:00.052 ], 00:21:00.052 "product_name": "NVMe disk", 00:21:00.052 "block_size": 512, 00:21:00.052 "num_blocks": 2097152, 00:21:00.053 "uuid": "7db8a2ea-831e-400b-ba5f-d8ed7d6b5d9d", 00:21:00.053 "assigned_rate_limits": { 00:21:00.053 "rw_ios_per_sec": 0, 00:21:00.053 "rw_mbytes_per_sec": 0, 00:21:00.053 "r_mbytes_per_sec": 0, 00:21:00.053 "w_mbytes_per_sec": 0 00:21:00.053 }, 00:21:00.053 "claimed": false, 00:21:00.053 "zoned": false, 00:21:00.053 "supported_io_types": { 00:21:00.053 "read": true, 00:21:00.053 "write": true, 00:21:00.053 "unmap": false, 00:21:00.053 "write_zeroes": true, 00:21:00.053 "flush": true, 00:21:00.053 "reset": true, 00:21:00.053 "compare": true, 00:21:00.053 "compare_and_write": true, 00:21:00.053 "abort": true, 00:21:00.053 "nvme_admin": true, 00:21:00.053 "nvme_io": true 00:21:00.053 }, 00:21:00.053 "memory_domains": [ 00:21:00.053 { 00:21:00.053 "dma_device_id": "system", 00:21:00.053 "dma_device_type": 1 00:21:00.053 } 00:21:00.053 ], 00:21:00.053 "driver_specific": { 00:21:00.053 "nvme": [ 00:21:00.053 { 00:21:00.053 "trid": { 00:21:00.053 "trtype": "TCP", 00:21:00.053 "adrfam": "IPv4", 00:21:00.053 "traddr": "10.0.0.2", 00:21:00.053 "trsvcid": "4420", 00:21:00.053 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:00.053 }, 00:21:00.053 "ctrlr_data": { 00:21:00.053 "cntlid": 1, 00:21:00.053 "vendor_id": "0x8086", 00:21:00.053 "model_number": "SPDK bdev Controller", 00:21:00.053 "serial_number": "00000000000000000000", 00:21:00.053 "firmware_revision": "24.05", 00:21:00.053 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:00.053 "oacs": { 00:21:00.053 "security": 0, 00:21:00.053 "format": 0, 00:21:00.053 "firmware": 0, 00:21:00.053 "ns_manage": 0 00:21:00.053 }, 00:21:00.053 "multi_ctrlr": true, 00:21:00.053 "ana_reporting": false 00:21:00.053 }, 00:21:00.053 "vs": { 00:21:00.053 "nvme_version": "1.3" 00:21:00.053 }, 00:21:00.053 "ns_data": { 00:21:00.053 "id": 1, 00:21:00.053 "can_share": true 00:21:00.053 } 00:21:00.053 } 00:21:00.053 ], 00:21:00.053 "mp_policy": "active_passive" 00:21:00.053 } 00:21:00.053 } 00:21:00.053 ] 00:21:00.053 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.053 00:54:52 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:00.053 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.053 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.053 [2024-04-27 00:54:52.619002] 
nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:00.053 [2024-04-27 00:54:52.619092] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000006840 (9): Bad file descriptor 00:21:00.312 [2024-04-27 00:54:52.761343] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:00.312 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.312 00:54:52 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:00.312 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.312 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 [ 00:21:00.312 { 00:21:00.312 "name": "nvme0n1", 00:21:00.312 "aliases": [ 00:21:00.312 "7db8a2ea-831e-400b-ba5f-d8ed7d6b5d9d" 00:21:00.312 ], 00:21:00.312 "product_name": "NVMe disk", 00:21:00.312 "block_size": 512, 00:21:00.312 "num_blocks": 2097152, 00:21:00.312 "uuid": "7db8a2ea-831e-400b-ba5f-d8ed7d6b5d9d", 00:21:00.312 "assigned_rate_limits": { 00:21:00.312 "rw_ios_per_sec": 0, 00:21:00.312 "rw_mbytes_per_sec": 0, 00:21:00.312 "r_mbytes_per_sec": 0, 00:21:00.312 "w_mbytes_per_sec": 0 00:21:00.312 }, 00:21:00.312 "claimed": false, 00:21:00.312 "zoned": false, 00:21:00.312 "supported_io_types": { 00:21:00.312 "read": true, 00:21:00.312 "write": true, 00:21:00.312 "unmap": false, 00:21:00.312 "write_zeroes": true, 00:21:00.312 "flush": true, 00:21:00.312 "reset": true, 00:21:00.312 "compare": true, 00:21:00.312 "compare_and_write": true, 00:21:00.312 "abort": true, 00:21:00.312 "nvme_admin": true, 00:21:00.312 "nvme_io": true 00:21:00.312 }, 00:21:00.312 "memory_domains": [ 00:21:00.312 { 00:21:00.312 "dma_device_id": "system", 00:21:00.312 "dma_device_type": 1 00:21:00.312 } 00:21:00.312 ], 00:21:00.312 "driver_specific": { 00:21:00.312 "nvme": [ 00:21:00.312 { 00:21:00.312 "trid": { 00:21:00.312 "trtype": "TCP", 00:21:00.312 "adrfam": "IPv4", 00:21:00.312 "traddr": "10.0.0.2", 00:21:00.312 "trsvcid": "4420", 00:21:00.312 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:00.312 }, 00:21:00.312 "ctrlr_data": { 00:21:00.312 "cntlid": 2, 00:21:00.312 "vendor_id": "0x8086", 00:21:00.312 "model_number": "SPDK bdev Controller", 00:21:00.312 "serial_number": "00000000000000000000", 00:21:00.312 "firmware_revision": "24.05", 00:21:00.312 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:00.312 "oacs": { 00:21:00.312 "security": 0, 00:21:00.312 "format": 0, 00:21:00.312 "firmware": 0, 00:21:00.312 "ns_manage": 0 00:21:00.312 }, 00:21:00.312 "multi_ctrlr": true, 00:21:00.312 "ana_reporting": false 00:21:00.312 }, 00:21:00.312 "vs": { 00:21:00.312 "nvme_version": "1.3" 00:21:00.312 }, 00:21:00.312 "ns_data": { 00:21:00.312 "id": 1, 00:21:00.312 "can_share": true 00:21:00.312 } 00:21:00.312 } 00:21:00.312 ], 00:21:00.312 "mp_policy": "active_passive" 00:21:00.312 } 00:21:00.312 } 00:21:00.312 ] 00:21:00.312 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.312 00:54:52 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.312 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.312 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.312 00:54:52 -- host/async_init.sh@53 -- # mktemp 00:21:00.312 00:54:52 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.4Vvc4EnnsT 00:21:00.312 00:54:52 -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:00.312 00:54:52 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.4Vvc4EnnsT 00:21:00.312 00:54:52 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:00.312 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.312 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.312 00:54:52 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:00.312 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.312 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 [2024-04-27 00:54:52.815142] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:00.312 [2024-04-27 00:54:52.815308] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:00.312 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.312 00:54:52 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4Vvc4EnnsT 00:21:00.312 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.312 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 [2024-04-27 00:54:52.823156] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:00.312 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.312 00:54:52 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4Vvc4EnnsT 00:21:00.312 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.312 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 [2024-04-27 00:54:52.831119] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:00.312 [2024-04-27 00:54:52.831192] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:00.312 nvme0n1 00:21:00.312 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.312 00:54:52 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:00.312 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.312 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.312 [ 00:21:00.312 { 00:21:00.312 "name": "nvme0n1", 00:21:00.312 "aliases": [ 00:21:00.312 "7db8a2ea-831e-400b-ba5f-d8ed7d6b5d9d" 00:21:00.312 ], 00:21:00.312 "product_name": "NVMe disk", 00:21:00.312 "block_size": 512, 00:21:00.312 "num_blocks": 2097152, 00:21:00.312 "uuid": "7db8a2ea-831e-400b-ba5f-d8ed7d6b5d9d", 00:21:00.312 "assigned_rate_limits": { 00:21:00.312 "rw_ios_per_sec": 0, 00:21:00.312 "rw_mbytes_per_sec": 0, 00:21:00.312 "r_mbytes_per_sec": 0, 00:21:00.312 "w_mbytes_per_sec": 0 00:21:00.312 }, 00:21:00.312 "claimed": false, 00:21:00.312 "zoned": false, 00:21:00.312 "supported_io_types": { 00:21:00.312 "read": true, 00:21:00.312 "write": true, 00:21:00.312 "unmap": false, 00:21:00.312 "write_zeroes": true, 00:21:00.312 "flush": true, 00:21:00.312 "reset": true, 00:21:00.312 "compare": true, 00:21:00.312 "compare_and_write": true, 00:21:00.312 
"abort": true, 00:21:00.312 "nvme_admin": true, 00:21:00.312 "nvme_io": true 00:21:00.312 }, 00:21:00.312 "memory_domains": [ 00:21:00.312 { 00:21:00.312 "dma_device_id": "system", 00:21:00.312 "dma_device_type": 1 00:21:00.312 } 00:21:00.312 ], 00:21:00.312 "driver_specific": { 00:21:00.313 "nvme": [ 00:21:00.313 { 00:21:00.313 "trid": { 00:21:00.313 "trtype": "TCP", 00:21:00.313 "adrfam": "IPv4", 00:21:00.313 "traddr": "10.0.0.2", 00:21:00.313 "trsvcid": "4421", 00:21:00.313 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:00.313 }, 00:21:00.313 "ctrlr_data": { 00:21:00.313 "cntlid": 3, 00:21:00.313 "vendor_id": "0x8086", 00:21:00.313 "model_number": "SPDK bdev Controller", 00:21:00.313 "serial_number": "00000000000000000000", 00:21:00.313 "firmware_revision": "24.05", 00:21:00.313 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:00.313 "oacs": { 00:21:00.313 "security": 0, 00:21:00.313 "format": 0, 00:21:00.313 "firmware": 0, 00:21:00.313 "ns_manage": 0 00:21:00.313 }, 00:21:00.313 "multi_ctrlr": true, 00:21:00.313 "ana_reporting": false 00:21:00.313 }, 00:21:00.313 "vs": { 00:21:00.313 "nvme_version": "1.3" 00:21:00.313 }, 00:21:00.313 "ns_data": { 00:21:00.313 "id": 1, 00:21:00.313 "can_share": true 00:21:00.313 } 00:21:00.313 } 00:21:00.313 ], 00:21:00.313 "mp_policy": "active_passive" 00:21:00.313 } 00:21:00.313 } 00:21:00.313 ] 00:21:00.313 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.313 00:54:52 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:00.313 00:54:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:00.313 00:54:52 -- common/autotest_common.sh@10 -- # set +x 00:21:00.313 00:54:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:00.313 00:54:52 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.4Vvc4EnnsT 00:21:00.313 00:54:52 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:21:00.313 00:54:52 -- host/async_init.sh@78 -- # nvmftestfini 00:21:00.313 00:54:52 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:00.313 00:54:52 -- nvmf/common.sh@117 -- # sync 00:21:00.313 00:54:52 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.313 00:54:52 -- nvmf/common.sh@120 -- # set +e 00:21:00.313 00:54:52 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:00.313 00:54:52 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.313 rmmod nvme_tcp 00:21:00.313 rmmod nvme_fabrics 00:21:00.313 rmmod nvme_keyring 00:21:00.571 00:54:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:00.571 00:54:53 -- nvmf/common.sh@124 -- # set -e 00:21:00.571 00:54:53 -- nvmf/common.sh@125 -- # return 0 00:21:00.571 00:54:53 -- nvmf/common.sh@478 -- # '[' -n 2819365 ']' 00:21:00.571 00:54:53 -- nvmf/common.sh@479 -- # killprocess 2819365 00:21:00.571 00:54:53 -- common/autotest_common.sh@936 -- # '[' -z 2819365 ']' 00:21:00.571 00:54:53 -- common/autotest_common.sh@940 -- # kill -0 2819365 00:21:00.571 00:54:53 -- common/autotest_common.sh@941 -- # uname 00:21:00.571 00:54:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:00.571 00:54:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2819365 00:21:00.571 00:54:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:00.571 00:54:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:00.571 00:54:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2819365' 00:21:00.571 killing process with pid 2819365 00:21:00.571 00:54:53 -- common/autotest_common.sh@955 -- # kill 2819365 00:21:00.571 
[2024-04-27 00:54:53.058849] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:00.571 [2024-04-27 00:54:53.058889] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:00.571 00:54:53 -- common/autotest_common.sh@960 -- # wait 2819365 00:21:00.831 00:54:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:00.831 00:54:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:00.831 00:54:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:00.831 00:54:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:00.831 00:54:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:00.831 00:54:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:00.831 00:54:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:00.831 00:54:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.437 00:54:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.438 00:21:03.438 real 0m9.800s 00:21:03.438 user 0m3.553s 00:21:03.438 sys 0m4.626s 00:21:03.438 00:54:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:03.438 00:54:55 -- common/autotest_common.sh@10 -- # set +x 00:21:03.438 ************************************ 00:21:03.438 END TEST nvmf_async_init 00:21:03.438 ************************************ 00:21:03.438 00:54:55 -- nvmf/nvmf.sh@92 -- # run_test dma /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:03.438 00:54:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:03.438 00:54:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:03.438 00:54:55 -- common/autotest_common.sh@10 -- # set +x 00:21:03.438 ************************************ 00:21:03.438 START TEST dma 00:21:03.438 ************************************ 00:21:03.438 00:54:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:21:03.438 * Looking for test storage... 
00:21:03.438 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:03.438 00:54:55 -- host/dma.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.438 00:54:55 -- nvmf/common.sh@7 -- # uname -s 00:21:03.438 00:54:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.438 00:54:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.438 00:54:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.438 00:54:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.438 00:54:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.438 00:54:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.438 00:54:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.438 00:54:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.438 00:54:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.438 00:54:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.438 00:54:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:21:03.438 00:54:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:21:03.438 00:54:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.438 00:54:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.438 00:54:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:03.438 00:54:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.438 00:54:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:03.438 00:54:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.438 00:54:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.438 00:54:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.438 00:54:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.438 00:54:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.438 00:54:55 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.438 00:54:55 -- paths/export.sh@5 -- # export PATH 00:21:03.438 00:54:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.438 00:54:55 -- nvmf/common.sh@47 -- # : 0 00:21:03.438 00:54:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.438 00:54:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.438 00:54:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.438 00:54:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.438 00:54:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.438 00:54:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.438 00:54:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.438 00:54:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.438 00:54:55 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:21:03.438 00:54:55 -- host/dma.sh@13 -- # exit 0 00:21:03.438 00:21:03.438 real 0m0.102s 00:21:03.438 user 0m0.032s 00:21:03.438 sys 0m0.078s 00:21:03.438 00:54:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:03.438 00:54:55 -- common/autotest_common.sh@10 -- # set +x 00:21:03.438 ************************************ 00:21:03.438 END TEST dma 00:21:03.438 ************************************ 00:21:03.438 00:54:55 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:03.438 00:54:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:03.438 00:54:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:03.438 00:54:55 -- common/autotest_common.sh@10 -- # set +x 00:21:03.438 ************************************ 00:21:03.438 START TEST nvmf_identify 00:21:03.438 ************************************ 00:21:03.438 00:54:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:03.438 * Looking for test storage... 
00:21:03.438 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:03.438 00:54:56 -- host/identify.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.438 00:54:56 -- nvmf/common.sh@7 -- # uname -s 00:21:03.438 00:54:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.438 00:54:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.438 00:54:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.438 00:54:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.438 00:54:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.438 00:54:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.438 00:54:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.438 00:54:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.438 00:54:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.438 00:54:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.438 00:54:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:21:03.438 00:54:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:21:03.438 00:54:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:03.438 00:54:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.438 00:54:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:03.438 00:54:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.438 00:54:56 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:03.438 00:54:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.438 00:54:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.438 00:54:56 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.438 00:54:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.438 00:54:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.438 00:54:56 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.438 00:54:56 -- paths/export.sh@5 -- # export PATH 00:21:03.438 00:54:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.438 00:54:56 -- nvmf/common.sh@47 -- # : 0 00:21:03.438 00:54:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.439 00:54:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.439 00:54:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.439 00:54:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.439 00:54:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.439 00:54:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.439 00:54:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.439 00:54:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.439 00:54:56 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:03.439 00:54:56 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:03.439 00:54:56 -- host/identify.sh@14 -- # nvmftestinit 00:21:03.439 00:54:56 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:03.439 00:54:56 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.439 00:54:56 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:03.439 00:54:56 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:03.439 00:54:56 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:03.439 00:54:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.439 00:54:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:03.439 00:54:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.439 00:54:56 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:03.439 00:54:56 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:03.439 00:54:56 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.439 00:54:56 -- common/autotest_common.sh@10 -- # set +x 00:21:10.014 00:55:01 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:10.014 00:55:01 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:10.014 00:55:01 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:10.014 00:55:01 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:10.014 00:55:01 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:10.014 00:55:01 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:10.014 00:55:01 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:10.014 00:55:01 -- nvmf/common.sh@295 -- # net_devs=() 00:21:10.014 00:55:01 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:10.014 00:55:01 -- 
nvmf/common.sh@296 -- # e810=() 00:21:10.015 00:55:01 -- nvmf/common.sh@296 -- # local -ga e810 00:21:10.015 00:55:01 -- nvmf/common.sh@297 -- # x722=() 00:21:10.015 00:55:01 -- nvmf/common.sh@297 -- # local -ga x722 00:21:10.015 00:55:01 -- nvmf/common.sh@298 -- # mlx=() 00:21:10.015 00:55:01 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:10.015 00:55:01 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:10.015 00:55:01 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:10.015 00:55:01 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:10.015 00:55:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.015 00:55:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:10.015 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:10.015 00:55:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:10.015 00:55:01 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:10.015 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:10.015 00:55:01 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:10.015 00:55:01 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.015 00:55:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.015 00:55:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:10.015 00:55:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.015 00:55:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:10.015 Found net devices under 0000:27:00.0: cvl_0_0 00:21:10.015 
00:55:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.015 00:55:01 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:10.015 00:55:01 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:10.015 00:55:01 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:10.015 00:55:01 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:10.015 00:55:01 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:10.015 Found net devices under 0000:27:00.1: cvl_0_1 00:21:10.015 00:55:01 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:10.015 00:55:01 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:10.015 00:55:01 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:10.015 00:55:01 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:10.015 00:55:01 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:10.015 00:55:01 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:10.015 00:55:01 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:10.015 00:55:01 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:10.015 00:55:01 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:10.015 00:55:01 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:10.015 00:55:01 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:10.015 00:55:01 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:10.015 00:55:01 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:10.015 00:55:01 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:10.015 00:55:01 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:10.015 00:55:01 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:10.015 00:55:01 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:10.015 00:55:01 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:10.015 00:55:01 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:10.015 00:55:01 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:10.015 00:55:01 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:10.015 00:55:01 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:10.015 00:55:01 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:10.015 00:55:01 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:10.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:10.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.693 ms 00:21:10.015 00:21:10.015 --- 10.0.0.2 ping statistics --- 00:21:10.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.015 rtt min/avg/max/mdev = 0.693/0.693/0.693/0.000 ms 00:21:10.015 00:55:01 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:10.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:10.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.349 ms 00:21:10.015 00:21:10.015 --- 10.0.0.1 ping statistics --- 00:21:10.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:10.015 rtt min/avg/max/mdev = 0.349/0.349/0.349/0.000 ms 00:21:10.015 00:55:01 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:10.015 00:55:01 -- nvmf/common.sh@411 -- # return 0 00:21:10.015 00:55:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:10.015 00:55:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:10.015 00:55:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:10.015 00:55:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:10.015 00:55:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:10.015 00:55:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:10.015 00:55:01 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:10.015 00:55:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:10.015 00:55:01 -- common/autotest_common.sh@10 -- # set +x 00:21:10.015 00:55:01 -- host/identify.sh@19 -- # nvmfpid=2823754 00:21:10.015 00:55:01 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:10.015 00:55:01 -- host/identify.sh@23 -- # waitforlisten 2823754 00:21:10.015 00:55:01 -- common/autotest_common.sh@817 -- # '[' -z 2823754 ']' 00:21:10.015 00:55:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.015 00:55:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:10.015 00:55:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.015 00:55:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:10.015 00:55:01 -- common/autotest_common.sh@10 -- # set +x 00:21:10.015 00:55:01 -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:10.016 [2024-04-27 00:55:01.734255] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:21:10.016 [2024-04-27 00:55:01.734352] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.016 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.016 [2024-04-27 00:55:01.854378] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:10.016 [2024-04-27 00:55:01.948245] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.016 [2024-04-27 00:55:01.948280] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.016 [2024-04-27 00:55:01.948301] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.016 [2024-04-27 00:55:01.948311] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.016 [2024-04-27 00:55:01.948318] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
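Unlike the async_init run (-m 0x1, a single reactor), identify.sh starts the target with -m 0xF, so the reactor lines that follow show four cores coming up. The launch, as run inside the target namespace:

ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
# -i 0      shared-memory id (matches the 'spdk_trace -s nvmf -i 0' hint above)
# -e 0xFFFF enable all tracepoint groups
# -m 0xF    core mask: reactors on cores 0-3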
00:21:10.016 [2024-04-27 00:55:01.948425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.016 [2024-04-27 00:55:01.948505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:10.016 [2024-04-27 00:55:01.948605] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:10.016 [2024-04-27 00:55:01.948616] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:10.016 00:55:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:10.016 00:55:02 -- common/autotest_common.sh@850 -- # return 0 00:21:10.016 00:55:02 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:10.016 00:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.016 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.016 [2024-04-27 00:55:02.444237] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:10.016 00:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.016 00:55:02 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:10.016 00:55:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:10.016 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.016 00:55:02 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:10.016 00:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.016 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.016 Malloc0 00:21:10.016 00:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.016 00:55:02 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:10.016 00:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.016 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.016 00:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.016 00:55:02 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:10.016 00:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.016 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.016 00:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.016 00:55:02 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:10.016 00:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.016 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.016 [2024-04-27 00:55:02.545235] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:10.016 00:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.016 00:55:02 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:10.016 00:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.016 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.016 00:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.016 00:55:02 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:10.016 00:55:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.016 00:55:02 -- common/autotest_common.sh@10 -- # set +x 00:21:10.016 [2024-04-27 00:55:02.560953] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:21:10.016 [ 
00:21:10.016 { 00:21:10.016 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:10.016 "subtype": "Discovery", 00:21:10.016 "listen_addresses": [ 00:21:10.016 { 00:21:10.016 "transport": "TCP", 00:21:10.016 "trtype": "TCP", 00:21:10.016 "adrfam": "IPv4", 00:21:10.016 "traddr": "10.0.0.2", 00:21:10.016 "trsvcid": "4420" 00:21:10.016 } 00:21:10.016 ], 00:21:10.016 "allow_any_host": true, 00:21:10.016 "hosts": [] 00:21:10.016 }, 00:21:10.016 { 00:21:10.016 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.016 "subtype": "NVMe", 00:21:10.016 "listen_addresses": [ 00:21:10.016 { 00:21:10.016 "transport": "TCP", 00:21:10.016 "trtype": "TCP", 00:21:10.016 "adrfam": "IPv4", 00:21:10.016 "traddr": "10.0.0.2", 00:21:10.016 "trsvcid": "4420" 00:21:10.016 } 00:21:10.016 ], 00:21:10.016 "allow_any_host": true, 00:21:10.016 "hosts": [], 00:21:10.016 "serial_number": "SPDK00000000000001", 00:21:10.016 "model_number": "SPDK bdev Controller", 00:21:10.016 "max_namespaces": 32, 00:21:10.016 "min_cntlid": 1, 00:21:10.016 "max_cntlid": 65519, 00:21:10.016 "namespaces": [ 00:21:10.016 { 00:21:10.016 "nsid": 1, 00:21:10.016 "bdev_name": "Malloc0", 00:21:10.016 "name": "Malloc0", 00:21:10.016 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:10.016 "eui64": "ABCDEF0123456789", 00:21:10.016 "uuid": "ba459928-1120-46da-accb-864eadff9a44" 00:21:10.016 } 00:21:10.016 ] 00:21:10.016 } 00:21:10.016 ] 00:21:10.016 00:55:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.016 00:55:02 -- host/identify.sh@39 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:10.016 [2024-04-27 00:55:02.612787] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
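With the discovery and cnode1 subsystems in place, the test points spdk_nvme_identify at the discovery service; -L all enables every debug log flag, which is what produces the verbose nvme_tcp/nvme_ctrlr state-machine trace that follows. The invocation, verbatim from this run:

/var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
    -L all

The trace below walks the standard controller bring-up states visible in the DEBUG lines: connect adminq -> read vs -> read cap -> check en -> disable and wait for CSTS.RDY = 0 -> enable by writing CC.EN = 1.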
00:21:10.016 [2024-04-27 00:55:02.612887] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824065 ] 00:21:10.016 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.016 [2024-04-27 00:55:02.667418] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:21:10.016 [2024-04-27 00:55:02.667505] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:10.016 [2024-04-27 00:55:02.667515] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:10.016 [2024-04-27 00:55:02.667537] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:10.016 [2024-04-27 00:55:02.667552] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:10.016 [2024-04-27 00:55:02.671273] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:21:10.016 [2024-04-27 00:55:02.671327] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:21:10.016 [2024-04-27 00:55:02.679234] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:10.016 [2024-04-27 00:55:02.679255] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:10.016 [2024-04-27 00:55:02.679263] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:10.016 [2024-04-27 00:55:02.679268] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:10.016 [2024-04-27 00:55:02.679322] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.016 [2024-04-27 00:55:02.679334] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.016 [2024-04-27 00:55:02.679341] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.016 [2024-04-27 00:55:02.679372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:10.016 [2024-04-27 00:55:02.679396] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.016 [2024-04-27 00:55:02.687238] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.016 [2024-04-27 00:55:02.687256] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.016 [2024-04-27 00:55:02.687262] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.016 [2024-04-27 00:55:02.687270] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.016 [2024-04-27 00:55:02.687285] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:10.017 [2024-04-27 00:55:02.687299] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:21:10.017 [2024-04-27 00:55:02.687308] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:21:10.017 [2024-04-27 00:55:02.687332] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687339] nvme_tcp.c: 
949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687345] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.017 [2024-04-27 00:55:02.687361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.017 [2024-04-27 00:55:02.687380] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.017 [2024-04-27 00:55:02.687521] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.017 [2024-04-27 00:55:02.687529] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.017 [2024-04-27 00:55:02.687539] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687545] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.017 [2024-04-27 00:55:02.687555] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:21:10.017 [2024-04-27 00:55:02.687568] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:21:10.017 [2024-04-27 00:55:02.687577] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687583] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687588] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.017 [2024-04-27 00:55:02.687600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.017 [2024-04-27 00:55:02.687612] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.017 [2024-04-27 00:55:02.687720] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.017 [2024-04-27 00:55:02.687728] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.017 [2024-04-27 00:55:02.687732] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687737] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.017 [2024-04-27 00:55:02.687744] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:21:10.017 [2024-04-27 00:55:02.687754] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:21:10.017 [2024-04-27 00:55:02.687762] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687769] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687774] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.017 [2024-04-27 00:55:02.687785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.017 [2024-04-27 00:55:02.687798] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.017 [2024-04-27 00:55:02.687906] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.017 [2024-04-27 00:55:02.687912] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.017 [2024-04-27 00:55:02.687917] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687921] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.017 [2024-04-27 00:55:02.687928] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:10.017 [2024-04-27 00:55:02.687938] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687945] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.687950] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.017 [2024-04-27 00:55:02.687959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.017 [2024-04-27 00:55:02.687972] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.017 [2024-04-27 00:55:02.688075] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.017 [2024-04-27 00:55:02.688082] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.017 [2024-04-27 00:55:02.688086] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.688090] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.017 [2024-04-27 00:55:02.688098] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:21:10.017 [2024-04-27 00:55:02.688109] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:21:10.017 [2024-04-27 00:55:02.688117] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:10.017 [2024-04-27 00:55:02.688224] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:21:10.017 [2024-04-27 00:55:02.688231] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:10.017 [2024-04-27 00:55:02.688244] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.688254] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.688260] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.017 [2024-04-27 00:55:02.688269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.017 [2024-04-27 00:55:02.688282] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.017 [2024-04-27 00:55:02.688400] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.017 [2024-04-27 00:55:02.688406] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.017 [2024-04-27 00:55:02.688410] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.688415] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.017 [2024-04-27 00:55:02.688421] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:10.017 [2024-04-27 00:55:02.688430] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.688436] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.688441] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.017 [2024-04-27 00:55:02.688451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.017 [2024-04-27 00:55:02.688465] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.017 [2024-04-27 00:55:02.688582] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.017 [2024-04-27 00:55:02.688588] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.017 [2024-04-27 00:55:02.688592] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.688597] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.017 [2024-04-27 00:55:02.688603] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:10.017 [2024-04-27 00:55:02.688611] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:21:10.017 [2024-04-27 00:55:02.688620] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:21:10.017 [2024-04-27 00:55:02.688628] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:21:10.017 [2024-04-27 00:55:02.688644] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.017 [2024-04-27 00:55:02.688649] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.017 [2024-04-27 00:55:02.688659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.018 [2024-04-27 00:55:02.688670] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.018 [2024-04-27 00:55:02.688844] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.018 [2024-04-27 00:55:02.688850] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.018 [2024-04-27 00:55:02.688856] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.018 [2024-04-27 00:55:02.688863] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:21:10.018 [2024-04-27 00:55:02.688869] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:10.018 [2024-04-27 00:55:02.688876] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.018 [2024-04-27 00:55:02.688891] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.018 [2024-04-27 00:55:02.688897] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730526] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.282 [2024-04-27 00:55:02.730547] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.282 [2024-04-27 00:55:02.730552] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730558] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.282 [2024-04-27 00:55:02.730575] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:21:10.282 [2024-04-27 00:55:02.730586] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:21:10.282 [2024-04-27 00:55:02.730593] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:21:10.282 [2024-04-27 00:55:02.730603] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:21:10.282 [2024-04-27 00:55:02.730609] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:21:10.282 [2024-04-27 00:55:02.730616] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:21:10.282 [2024-04-27 00:55:02.730629] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:21:10.282 [2024-04-27 00:55:02.730641] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730647] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730653] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.282 [2024-04-27 00:55:02.730670] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.282 [2024-04-27 00:55:02.730685] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.282 [2024-04-27 00:55:02.730818] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.282 [2024-04-27 00:55:02.730825] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.282 [2024-04-27 00:55:02.730829] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730833] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.282 [2024-04-27 00:55:02.730843] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730848] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730853] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.282 [2024-04-27 00:55:02.730862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.282 [2024-04-27 00:55:02.730870] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730874] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730879] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:21:10.282 [2024-04-27 00:55:02.730885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.282 [2024-04-27 00:55:02.730891] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730896] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730900] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:21:10.282 [2024-04-27 00:55:02.730907] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.282 [2024-04-27 00:55:02.730913] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730917] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730922] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.282 [2024-04-27 00:55:02.730929] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.282 [2024-04-27 00:55:02.730934] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:21:10.282 [2024-04-27 00:55:02.730944] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:10.282 [2024-04-27 00:55:02.730952] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.282 [2024-04-27 00:55:02.730957] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.282 [2024-04-27 00:55:02.730968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.282 [2024-04-27 00:55:02.730981] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.283 [2024-04-27 00:55:02.730988] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:21:10.283 [2024-04-27 00:55:02.730992] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:21:10.283 [2024-04-27 00:55:02.730997] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.283 [2024-04-27 00:55:02.731002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.283 [2024-04-27 00:55:02.731203] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.283 [2024-04-27 00:55:02.731209] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.283 [2024-04-27 00:55:02.731213] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.731217] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.283 [2024-04-27 00:55:02.735233] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:21:10.283 [2024-04-27 00:55:02.735243] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:21:10.283 [2024-04-27 00:55:02.735259] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735265] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.283 [2024-04-27 00:55:02.735279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.283 [2024-04-27 00:55:02.735292] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.283 [2024-04-27 00:55:02.735448] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.283 [2024-04-27 00:55:02.735455] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.283 [2024-04-27 00:55:02.735462] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735467] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:21:10.283 [2024-04-27 00:55:02.735473] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:10.283 [2024-04-27 00:55:02.735480] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735489] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735494] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735521] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.283 [2024-04-27 00:55:02.735528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.283 [2024-04-27 00:55:02.735532] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735537] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.283 [2024-04-27 00:55:02.735556] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:21:10.283 [2024-04-27 00:55:02.735600] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735605] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.283 [2024-04-27 00:55:02.735617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.283 [2024-04-27 00:55:02.735626] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735631] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
enter 00:21:10.283 [2024-04-27 00:55:02.735636] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:10.283 [2024-04-27 00:55:02.735645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.283 [2024-04-27 00:55:02.735660] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.283 [2024-04-27 00:55:02.735666] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:10.283 [2024-04-27 00:55:02.735928] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.283 [2024-04-27 00:55:02.735935] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.283 [2024-04-27 00:55:02.735940] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735945] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=1024, cccid=4 00:21:10.283 [2024-04-27 00:55:02.735952] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=1024 00:21:10.283 [2024-04-27 00:55:02.735958] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735965] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735970] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.283 [2024-04-27 00:55:02.735985] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.283 [2024-04-27 00:55:02.735989] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.735994] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:10.283 [2024-04-27 00:55:02.777567] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.283 [2024-04-27 00:55:02.777581] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.283 [2024-04-27 00:55:02.777585] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.777591] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.283 [2024-04-27 00:55:02.777614] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.777619] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.283 [2024-04-27 00:55:02.777635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.283 [2024-04-27 00:55:02.777651] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.283 [2024-04-27 00:55:02.777811] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.283 [2024-04-27 00:55:02.777817] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.283 [2024-04-27 00:55:02.777821] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.283 [2024-04-27 00:55:02.777826] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0x614000002040): datao=0, datal=3072, cccid=4
00:21:10.283 [2024-04-27 00:55:02.777831] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=3072
00:21:10.283 [2024-04-27 00:55:02.777836] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.777843] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.777848] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.777879] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:10.283 [2024-04-27 00:55:02.777885] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:10.283 [2024-04-27 00:55:02.777889] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.777893] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040
00:21:10.283 [2024-04-27 00:55:02.777903] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.777911] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040)
00:21:10.283 [2024-04-27 00:55:02.777922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:10.283 [2024-04-27 00:55:02.777934] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0
00:21:10.283 [2024-04-27 00:55:02.778068] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:10.283 [2024-04-27 00:55:02.778075] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:10.283 [2024-04-27 00:55:02.778078] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.778083] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8, cccid=4
00:21:10.283 [2024-04-27 00:55:02.778088] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=8
00:21:10.283 [2024-04-27 00:55:02.778092] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.778099] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.778103] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.823233] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:10.283 [2024-04-27 00:55:02.823250] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:10.283 [2024-04-27 00:55:02.823255] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:10.283 [2024-04-27 00:55:02.823260] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040
00:21:10.283 =====================================================
00:21:10.283 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:21:10.283 =====================================================
00:21:10.283 Controller Capabilities/Features
00:21:10.283 ================================
00:21:10.283 Vendor ID: 0000
00:21:10.283 Subsystem Vendor ID: 0000
00:21:10.283 Serial Number: ....................
00:21:10.283 Model Number: ........................................
00:21:10.283 Firmware Version: 24.05
00:21:10.283 Recommended Arb Burst: 0
00:21:10.283 IEEE OUI Identifier: 00 00 00
00:21:10.283 Multi-path I/O
00:21:10.283 May have multiple subsystem ports: No
00:21:10.283 May have multiple controllers: No
00:21:10.283 Associated with SR-IOV VF: No
00:21:10.283 Max Data Transfer Size: 131072
00:21:10.283 Max Number of Namespaces: 0
00:21:10.283 Max Number of I/O Queues: 1024
00:21:10.283 NVMe Specification Version (VS): 1.3
00:21:10.283 NVMe Specification Version (Identify): 1.3
00:21:10.283 Maximum Queue Entries: 128
00:21:10.283 Contiguous Queues Required: Yes
00:21:10.283 Arbitration Mechanisms Supported
00:21:10.283 Weighted Round Robin: Not Supported
00:21:10.283 Vendor Specific: Not Supported
00:21:10.283 Reset Timeout: 15000 ms
00:21:10.283 Doorbell Stride: 4 bytes
00:21:10.283 NVM Subsystem Reset: Not Supported
00:21:10.283 Command Sets Supported
00:21:10.283 NVM Command Set: Supported
00:21:10.283 Boot Partition: Not Supported
00:21:10.283 Memory Page Size Minimum: 4096 bytes
00:21:10.284 Memory Page Size Maximum: 4096 bytes
00:21:10.284 Persistent Memory Region: Not Supported
00:21:10.284 Optional Asynchronous Events Supported
00:21:10.284 Namespace Attribute Notices: Not Supported
00:21:10.284 Firmware Activation Notices: Not Supported
00:21:10.284 ANA Change Notices: Not Supported
00:21:10.284 PLE Aggregate Log Change Notices: Not Supported
00:21:10.284 LBA Status Info Alert Notices: Not Supported
00:21:10.284 EGE Aggregate Log Change Notices: Not Supported
00:21:10.284 Normal NVM Subsystem Shutdown event: Not Supported
00:21:10.284 Zone Descriptor Change Notices: Not Supported
00:21:10.284 Discovery Log Change Notices: Supported
00:21:10.284 Controller Attributes
00:21:10.284 128-bit Host Identifier: Not Supported
00:21:10.284 Non-Operational Permissive Mode: Not Supported
00:21:10.284 NVM Sets: Not Supported
00:21:10.284 Read Recovery Levels: Not Supported
00:21:10.284 Endurance Groups: Not Supported
00:21:10.284 Predictable Latency Mode: Not Supported
00:21:10.284 Traffic Based Keep ALive: Not Supported
00:21:10.284 Namespace Granularity: Not Supported
00:21:10.284 SQ Associations: Not Supported
00:21:10.284 UUID List: Not Supported
00:21:10.284 Multi-Domain Subsystem: Not Supported
00:21:10.284 Fixed Capacity Management: Not Supported
00:21:10.284 Variable Capacity Management: Not Supported
00:21:10.284 Delete Endurance Group: Not Supported
00:21:10.284 Delete NVM Set: Not Supported
00:21:10.284 Extended LBA Formats Supported: Not Supported
00:21:10.284 Flexible Data Placement Supported: Not Supported
00:21:10.284
00:21:10.284 Controller Memory Buffer Support
00:21:10.284 ================================
00:21:10.284 Supported: No
00:21:10.284
00:21:10.284 Persistent Memory Region Support
00:21:10.284 ================================
00:21:10.284 Supported: No
00:21:10.284
00:21:10.284 Admin Command Set Attributes
00:21:10.284 ============================
00:21:10.284 Security Send/Receive: Not Supported
00:21:10.284 Format NVM: Not Supported
00:21:10.284 Firmware Activate/Download: Not Supported
00:21:10.284 Namespace Management: Not Supported
00:21:10.284 Device Self-Test: Not Supported
00:21:10.284 Directives: Not Supported
00:21:10.284 NVMe-MI: Not Supported
00:21:10.284 Virtualization Management: Not Supported
00:21:10.284 Doorbell Buffer Config: Not Supported
00:21:10.284 Get LBA Status Capability: Not Supported
00:21:10.284 Command & Feature Lockdown Capability: Not Supported
00:21:10.284 Abort Command Limit: 1
00:21:10.284 Async Event Request Limit: 4
00:21:10.284 Number of Firmware Slots: N/A
00:21:10.284 Firmware Slot 1 Read-Only: N/A
00:21:10.284 Firmware Activation Without Reset: N/A
00:21:10.284 Multiple Update Detection Support: N/A
00:21:10.284 Firmware Update Granularity: No Information Provided
00:21:10.284 Per-Namespace SMART Log: No
00:21:10.284 Asymmetric Namespace Access Log Page: Not Supported
00:21:10.284 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:10.284 Command Effects Log Page: Not Supported
00:21:10.284 Get Log Page Extended Data: Supported
00:21:10.284 Telemetry Log Pages: Not Supported
00:21:10.284 Persistent Event Log Pages: Not Supported
00:21:10.284 Supported Log Pages Log Page: May Support
00:21:10.284 Commands Supported & Effects Log Page: Not Supported
00:21:10.284 Feature Identifiers & Effects Log Page:May Support
00:21:10.284 NVMe-MI Commands & Effects Log Page: May Support
00:21:10.284 Data Area 4 for Telemetry Log: Not Supported
00:21:10.284 Error Log Page Entries Supported: 128
00:21:10.284 Keep Alive: Not Supported
00:21:10.284
00:21:10.284 NVM Command Set Attributes
00:21:10.284 ==========================
00:21:10.284 Submission Queue Entry Size
00:21:10.284 Max: 1
00:21:10.284 Min: 1
00:21:10.284 Completion Queue Entry Size
00:21:10.284 Max: 1
00:21:10.284 Min: 1
00:21:10.284 Number of Namespaces: 0
00:21:10.284 Compare Command: Not Supported
00:21:10.284 Write Uncorrectable Command: Not Supported
00:21:10.284 Dataset Management Command: Not Supported
00:21:10.284 Write Zeroes Command: Not Supported
00:21:10.284 Set Features Save Field: Not Supported
00:21:10.284 Reservations: Not Supported
00:21:10.284 Timestamp: Not Supported
00:21:10.284 Copy: Not Supported
00:21:10.284 Volatile Write Cache: Not Present
00:21:10.284 Atomic Write Unit (Normal): 1
00:21:10.284 Atomic Write Unit (PFail): 1
00:21:10.284 Atomic Compare & Write Unit: 1
00:21:10.284 Fused Compare & Write: Supported
00:21:10.284 Scatter-Gather List
00:21:10.284 SGL Command Set: Supported
00:21:10.284 SGL Keyed: Supported
00:21:10.284 SGL Bit Bucket Descriptor: Not Supported
00:21:10.284 SGL Metadata Pointer: Not Supported
00:21:10.284 Oversized SGL: Not Supported
00:21:10.284 SGL Metadata Address: Not Supported
00:21:10.284 SGL Offset: Supported
00:21:10.284 Transport SGL Data Block: Not Supported
00:21:10.284 Replay Protected Memory Block: Not Supported
00:21:10.284
00:21:10.284 Firmware Slot Information
00:21:10.284 =========================
00:21:10.284 Active slot: 0
00:21:10.284
00:21:10.284
00:21:10.284 Error Log
00:21:10.284 =========
00:21:10.284
00:21:10.284 Active Namespaces
00:21:10.284 =================
00:21:10.284 Discovery Log Page
00:21:10.284 ==================
00:21:10.284 Generation Counter: 2
00:21:10.284 Number of Records: 2
00:21:10.284 Record Format: 0
00:21:10.284
00:21:10.284 Discovery Log Entry 0
00:21:10.284 ----------------------
00:21:10.284 Transport Type: 3 (TCP)
00:21:10.284 Address Family: 1 (IPv4)
00:21:10.284 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:10.284 Entry Flags:
00:21:10.284 Duplicate Returned Information: 1
00:21:10.284 Explicit Persistent Connection Support for Discovery: 1
00:21:10.284 Transport Requirements:
00:21:10.284 Secure Channel: Not Required
00:21:10.284 Port ID: 0 (0x0000)
00:21:10.284 Controller ID: 65535 (0xffff)
00:21:10.284 Admin Max SQ Size: 128
00:21:10.284 Transport Service Identifier: 4420
00:21:10.284 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:10.284 Transport Address: 10.0.0.2
00:21:10.284 Discovery Log Entry 1
00:21:10.284 ----------------------
00:21:10.284 Transport Type: 3 (TCP)
00:21:10.284 Address Family: 1 (IPv4)
00:21:10.284 Subsystem Type: 2 (NVM Subsystem)
00:21:10.284 Entry Flags:
00:21:10.284 Duplicate Returned Information: 0
00:21:10.284 Explicit Persistent Connection Support for Discovery: 0
00:21:10.284 Transport Requirements:
00:21:10.284 Secure Channel: Not Required
00:21:10.284 Port ID: 0 (0x0000)
00:21:10.284 Controller ID: 65535 (0xffff)
00:21:10.284 Admin Max SQ Size: 128
00:21:10.284 Transport Service Identifier: 4420
00:21:10.284 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:10.284 Transport Address: 10.0.0.2 [2024-04-27 00:55:02.823387] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:21:10.284 [2024-04-27 00:55:02.823404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.284 [2024-04-27 00:55:02.823413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.284 [2024-04-27 00:55:02.823420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.284 [2024-04-27 00:55:02.823426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:10.284 [2024-04-27 00:55:02.823440] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:10.284 [2024-04-27 00:55:02.823446] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:10.284 [2024-04-27 00:55:02.823451] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040)
00:21:10.284 [2024-04-27 00:55:02.823462] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:10.284 [2024-04-27 00:55:02.823481] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:21:10.284 [2024-04-27 00:55:02.823602] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:10.284 [2024-04-27 00:55:02.823609] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:10.284 [2024-04-27 00:55:02.823613] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:10.284 [2024-04-27 00:55:02.823618] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040
00:21:10.284 [2024-04-27 00:55:02.823630] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:10.284 [2024-04-27 00:55:02.823635] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:10.284 [2024-04-27 00:55:02.823640] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040)
00:21:10.284 [2024-04-27 00:55:02.823651] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:10.284 [2024-04-27 00:55:02.823666] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0
00:21:10.284 [2024-04-27 00:55:02.823801] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:10.284 [2024-04-27 00:55:02.823809]
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.284 [2024-04-27 00:55:02.823813] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.823817] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.823825] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:21:10.285 [2024-04-27 00:55:02.823831] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:21:10.285 [2024-04-27 00:55:02.823842] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.823847] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.823852] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.823862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.823872] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.823988] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.823994] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.823998] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824003] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.824013] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824018] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824022] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.824030] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.824040] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.824160] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.824167] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.824171] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824175] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.824184] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824188] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824193] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.824200] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.824210] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: 
*DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.824321] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.824327] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.824331] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824336] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.824345] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824349] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824353] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.824365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.824377] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.824485] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.824492] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.824496] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824500] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.824509] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824513] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824518] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.824525] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.824535] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.824640] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.824647] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.824650] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824655] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.824664] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824668] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824672] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.824680] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.824690] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.824832] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.824838] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.824842] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824846] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.824856] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824860] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.824864] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.824872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.824882] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.824989] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.824995] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.824999] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825003] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.825013] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825017] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825021] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.825031] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.825044] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.825150] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.825157] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.825161] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825165] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.825174] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825178] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825182] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.825190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.825200] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.825312] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.825318] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:10.285 [2024-04-27 00:55:02.825322] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.825335] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825339] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825343] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.825352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.825361] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.825476] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.825482] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.825486] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825490] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.825499] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825503] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825508] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.285 [2024-04-27 00:55:02.825515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.285 [2024-04-27 00:55:02.825525] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.285 [2024-04-27 00:55:02.825634] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.285 [2024-04-27 00:55:02.825640] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.285 [2024-04-27 00:55:02.825644] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825648] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.285 [2024-04-27 00:55:02.825658] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.285 [2024-04-27 00:55:02.825662] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.825666] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.825674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.825685] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.825798] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.825804] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.825808] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 
00:55:02.825812] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.825822] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.825826] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.825830] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.825838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.825847] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.825952] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.825958] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.825962] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.825966] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.825975] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.825980] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.825984] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.825992] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.826002] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.826121] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.826127] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.826131] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826135] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.826145] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826149] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826153] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.826161] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.826170] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.826277] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.826283] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.826287] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826291] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 
00:21:10.286 [2024-04-27 00:55:02.826301] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826305] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826309] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.826318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.826329] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.826443] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.826449] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.826453] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826458] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.826467] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826471] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826475] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.826482] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.826492] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.826599] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.826605] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.826609] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826613] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.826622] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826626] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826631] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.826639] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.826648] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.826756] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.826762] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.826766] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826770] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.826779] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 
00:55:02.826784] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826788] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.826796] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.826806] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.826914] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.826920] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.826924] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826928] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.826938] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826942] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.826946] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.826954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.826965] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.827070] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.827076] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.827080] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.827085] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.827094] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.827098] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.827102] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.827110] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.827120] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.831229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.831237] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.831241] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.831245] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.831255] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.831260] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.831264] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.286 [2024-04-27 00:55:02.831273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.286 [2024-04-27 00:55:02.831284] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.286 [2024-04-27 00:55:02.831387] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.286 [2024-04-27 00:55:02.831393] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.286 [2024-04-27 00:55:02.831397] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.286 [2024-04-27 00:55:02.831401] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.286 [2024-04-27 00:55:02.831409] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:21:10.286 00:21:10.286 00:55:02 -- host/identify.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:21:10.286 [2024-04-27 00:55:02.915486] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:21:10.286 [2024-04-27 00:55:02.915583] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2824068 ] 00:21:10.287 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.287 [2024-04-27 00:55:02.970164] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:21:10.287 [2024-04-27 00:55:02.970253] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:10.287 [2024-04-27 00:55:02.970261] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:10.287 [2024-04-27 00:55:02.970284] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:10.287 [2024-04-27 00:55:02.970298] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:10.287 [2024-04-27 00:55:02.970761] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:21:10.287 [2024-04-27 00:55:02.970794] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x614000002040 0 00:21:10.551 [2024-04-27 00:55:02.985231] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:10.551 [2024-04-27 00:55:02.985249] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:10.552 [2024-04-27 00:55:02.985255] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:10.552 [2024-04-27 00:55:02.985261] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:10.552 [2024-04-27 00:55:02.985301] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.985310] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.985317] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.552 [2024-04-27 
00:55:02.985339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:10.552 [2024-04-27 00:55:02.985361] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.552 [2024-04-27 00:55:02.993233] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.552 [2024-04-27 00:55:02.993247] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.552 [2024-04-27 00:55:02.993252] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993259] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.552 [2024-04-27 00:55:02.993273] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:10.552 [2024-04-27 00:55:02.993286] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:21:10.552 [2024-04-27 00:55:02.993294] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:21:10.552 [2024-04-27 00:55:02.993310] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993317] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993325] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.552 [2024-04-27 00:55:02.993341] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.552 [2024-04-27 00:55:02.993359] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.552 [2024-04-27 00:55:02.993468] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.552 [2024-04-27 00:55:02.993475] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.552 [2024-04-27 00:55:02.993486] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993491] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.552 [2024-04-27 00:55:02.993499] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:21:10.552 [2024-04-27 00:55:02.993507] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:21:10.552 [2024-04-27 00:55:02.993517] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993524] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993530] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.552 [2024-04-27 00:55:02.993541] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.552 [2024-04-27 00:55:02.993554] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.552 [2024-04-27 00:55:02.993638] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.552 [2024-04-27 00:55:02.993646] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
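The "setting state to ..." entries that follow walk the standard controller-enable handshake: read CC ("check en"), observe CC.EN = 0 && CSTS.RDY = 0 (controller is disabled), write CC.EN = 1, then poll CSTS until RDY = 1, at which point the identify sequence can start. A minimal sketch of that state machine against a toy in-memory register file; the read_reg32/write_reg32 helpers are invented stand-ins for the Property Get/Set capsules seen in this trace, not a real SPDK API:

#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CC    0x14
#define NVME_REG_CSTS  0x1c
#define NVME_CC_EN     (1u << 0)
#define NVME_CSTS_RDY  (1u << 0)

/* toy register file standing in for Property Get/Set over the admin queue */
static uint32_t cc, csts;

static uint32_t read_reg32(uint32_t ofst)
{
    if (ofst == NVME_REG_CC)
        return cc;
    /* model a controller that reports ready once CC.EN has been set */
    if (cc & NVME_CC_EN)
        csts |= NVME_CSTS_RDY;
    return csts;
}

static void write_reg32(uint32_t ofst, uint32_t val)
{
    if (ofst == NVME_REG_CC)
        cc = val;
}

int main(void)
{
    /* "check en": CC.EN = 0 && CSTS.RDY = 0 -> controller is disabled */
    if (!(read_reg32(NVME_REG_CC) & NVME_CC_EN) &&
        !(read_reg32(NVME_REG_CSTS) & NVME_CSTS_RDY)) {
        /* "Setting CC.EN = 1" */
        write_reg32(NVME_REG_CC, read_reg32(NVME_REG_CC) | NVME_CC_EN);
        /* "wait for CSTS.RDY = 1" -- the real code bounds this poll
         * with the 15000 ms timeout shown in the trace */
        while (!(read_reg32(NVME_REG_CSTS) & NVME_CSTS_RDY))
            ;
        printf("CC.EN = 1 && CSTS.RDY = 1 - controller is ready\n");
    }
    return 0;
}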
00:21:10.552 [2024-04-27 00:55:02.993650] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993654] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.552 [2024-04-27 00:55:02.993660] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:21:10.552 [2024-04-27 00:55:02.993670] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:21:10.552 [2024-04-27 00:55:02.993682] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993687] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993692] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.552 [2024-04-27 00:55:02.993701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.552 [2024-04-27 00:55:02.993713] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.552 [2024-04-27 00:55:02.993805] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.552 [2024-04-27 00:55:02.993812] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.552 [2024-04-27 00:55:02.993816] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993821] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.552 [2024-04-27 00:55:02.993827] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:10.552 [2024-04-27 00:55:02.993838] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993844] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993849] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.552 [2024-04-27 00:55:02.993858] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.552 [2024-04-27 00:55:02.993869] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.552 [2024-04-27 00:55:02.993955] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.552 [2024-04-27 00:55:02.993961] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.552 [2024-04-27 00:55:02.993965] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.993969] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.552 [2024-04-27 00:55:02.993975] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:21:10.552 [2024-04-27 00:55:02.993981] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:21:10.552 [2024-04-27 00:55:02.993992] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:10.552 [2024-04-27 00:55:02.994101] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:21:10.552 [2024-04-27 00:55:02.994108] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:21:10.552 [2024-04-27 00:55:02.994118] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.994123] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.994129] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.552 [2024-04-27 00:55:02.994139] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.552 [2024-04-27 00:55:02.994149] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.552 [2024-04-27 00:55:02.994239] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.552 [2024-04-27 00:55:02.994245] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.552 [2024-04-27 00:55:02.994249] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.994254] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.552 [2024-04-27 00:55:02.994260] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:10.552 [2024-04-27 00:55:02.994270] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.994276] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.552 [2024-04-27 00:55:02.994281] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.552 [2024-04-27 00:55:02.994291] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.552 [2024-04-27 00:55:02.994302] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.553 [2024-04-27 00:55:02.994393] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.553 [2024-04-27 00:55:02.994399] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.553 [2024-04-27 00:55:02.994403] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994408] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.553 [2024-04-27 00:55:02.994415] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:10.553 [2024-04-27 00:55:02.994421] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.994432] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:21:10.553 [2024-04-27 00:55:02.994444] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify 
controller (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.994456] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994462] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.553 [2024-04-27 00:55:02.994471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.553 [2024-04-27 00:55:02.994481] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.553 [2024-04-27 00:55:02.994612] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.553 [2024-04-27 00:55:02.994618] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.553 [2024-04-27 00:55:02.994622] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994628] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=0 00:21:10.553 [2024-04-27 00:55:02.994635] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b100) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:10.553 [2024-04-27 00:55:02.994641] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994651] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994657] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994679] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.553 [2024-04-27 00:55:02.994685] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.553 [2024-04-27 00:55:02.994689] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994693] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.553 [2024-04-27 00:55:02.994705] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:21:10.553 [2024-04-27 00:55:02.994712] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:21:10.553 [2024-04-27 00:55:02.994718] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:21:10.553 [2024-04-27 00:55:02.994726] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:21:10.553 [2024-04-27 00:55:02.994732] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:21:10.553 [2024-04-27 00:55:02.994738] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.994748] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.994759] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994765] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994770] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.553 [2024-04-27 
00:55:02.994779] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.553 [2024-04-27 00:55:02.994790] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.553 [2024-04-27 00:55:02.994876] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.553 [2024-04-27 00:55:02.994883] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.553 [2024-04-27 00:55:02.994887] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994891] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b100) on tqpair=0x614000002040 00:21:10.553 [2024-04-27 00:55:02.994899] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994905] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994911] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x614000002040) 00:21:10.553 [2024-04-27 00:55:02.994921] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.553 [2024-04-27 00:55:02.994930] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994935] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994939] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x614000002040) 00:21:10.553 [2024-04-27 00:55:02.994946] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.553 [2024-04-27 00:55:02.994953] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994957] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994961] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x614000002040) 00:21:10.553 [2024-04-27 00:55:02.994969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.553 [2024-04-27 00:55:02.994975] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994981] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.994985] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.553 [2024-04-27 00:55:02.994992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.553 [2024-04-27 00:55:02.994998] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.995007] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.995014] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.995020] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.553 [2024-04-27 
00:55:02.995029] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.553 [2024-04-27 00:55:02.995041] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b100, cid 0, qid 0 00:21:10.553 [2024-04-27 00:55:02.995046] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b260, cid 1, qid 0 00:21:10.553 [2024-04-27 00:55:02.995051] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b3c0, cid 2, qid 0 00:21:10.553 [2024-04-27 00:55:02.995056] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.553 [2024-04-27 00:55:02.995061] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.553 [2024-04-27 00:55:02.995178] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.553 [2024-04-27 00:55:02.995185] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.553 [2024-04-27 00:55:02.995188] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.995192] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.553 [2024-04-27 00:55:02.995200] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:21:10.553 [2024-04-27 00:55:02.995207] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.995216] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.995227] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:21:10.553 [2024-04-27 00:55:02.995240] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.995246] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.553 [2024-04-27 00:55:02.995251] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.553 [2024-04-27 00:55:02.995260] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:10.553 [2024-04-27 00:55:02.995270] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.553 [2024-04-27 00:55:02.995356] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.554 [2024-04-27 00:55:02.995362] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.554 [2024-04-27 00:55:02.995366] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995370] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.554 [2024-04-27 00:55:02.995423] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.995435] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active 
ns (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.995447] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995453] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.554 [2024-04-27 00:55:02.995461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.554 [2024-04-27 00:55:02.995472] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.554 [2024-04-27 00:55:02.995575] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.554 [2024-04-27 00:55:02.995581] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.554 [2024-04-27 00:55:02.995585] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995590] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:21:10.554 [2024-04-27 00:55:02.995595] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:10.554 [2024-04-27 00:55:02.995600] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995613] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995617] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995667] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.554 [2024-04-27 00:55:02.995673] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.554 [2024-04-27 00:55:02.995676] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995681] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.554 [2024-04-27 00:55:02.995698] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:21:10.554 [2024-04-27 00:55:02.995712] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.995723] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.995732] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995738] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.554 [2024-04-27 00:55:02.995747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.554 [2024-04-27 00:55:02.995759] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.554 [2024-04-27 00:55:02.995865] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.554 [2024-04-27 00:55:02.995874] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.554 [2024-04-27 00:55:02.995878] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995882] 
nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:21:10.554 [2024-04-27 00:55:02.995887] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:10.554 [2024-04-27 00:55:02.995891] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995904] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995908] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995967] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.554 [2024-04-27 00:55:02.995974] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.554 [2024-04-27 00:55:02.995978] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.995983] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.554 [2024-04-27 00:55:02.996003] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.996013] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.996022] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996028] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.554 [2024-04-27 00:55:02.996037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.554 [2024-04-27 00:55:02.996047] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.554 [2024-04-27 00:55:02.996140] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.554 [2024-04-27 00:55:02.996147] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.554 [2024-04-27 00:55:02.996151] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996155] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=4 00:21:10.554 [2024-04-27 00:55:02.996160] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:10.554 [2024-04-27 00:55:02.996165] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996175] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996179] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996229] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.554 [2024-04-27 00:55:02.996235] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.554 [2024-04-27 00:55:02.996239] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996243] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.554 
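The IDENTIFY commands above step through CNS 01h (controller), 02h (active namespace ID list), 00h (namespace), and 03h (namespace ID descriptors), and the GET LOG PAGE fan-out a few entries below requests log pages 01h, 02h, 03h, and 05h. A small decoder for the cdw10 values printed in this trace; the sample inputs are copied verbatim from the log, and the computed byte counts can be checked against the c2h_data datal fields (8192, 512, 512, 4096):

#include <stdint.h>
#include <stdio.h>

static void decode_identify(uint32_t cdw10)
{
    static const char *cns_name[] = {
        "identify namespace",            /* CNS 00h */
        "identify controller",           /* CNS 01h */
        "active namespace ID list",      /* CNS 02h */
        "namespace ID descriptor list",  /* CNS 03h */
    };
    uint32_t cns = cdw10 & 0xff;

    printf("IDENTIFY cdw10=%08x -> CNS %02x (%s)\n", cdw10, cns,
           cns < 4 ? cns_name[cns] : "other");
}

static void decode_get_log_page(uint32_t cdw10)
{
    uint32_t lid   = cdw10 & 0xff;          /* log page identifier */
    uint32_t numdl = (cdw10 >> 16) & 0xffff; /* zero-based dword count */

    printf("GET LOG PAGE cdw10=%08x -> LID %02x, %u bytes\n",
           cdw10, lid, (numdl + 1) * 4);
}

int main(void)
{
    /* IDENTIFY sequence: controller, active NS list, NS, NS descriptors */
    decode_identify(0x00000001);
    decode_identify(0x00000002);
    decode_identify(0x00000000);
    decode_identify(0x00000003);

    /* GET LOG PAGE fan-out: error information (8192 B), SMART (512 B),
     * firmware slot (512 B), commands supported & effects (4096 B) */
    decode_get_log_page(0x07ff0001);
    decode_get_log_page(0x007f0002);
    decode_get_log_page(0x007f0003);
    decode_get_log_page(0x03ff0005);
    return 0;
}

The helper names are ad hoc, but the encodings (CNS in cdw10 bits 7:0 for IDENTIFY; LID in bits 7:0 and zero-based NUMDL in bits 31:16 for GET LOG PAGE) follow the NVMe base spec. Note that the 8192-byte error log works out to 128 entries of 64 bytes, consistent with "Error Log Page Entries Supported: 128" in the identify report below.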
[2024-04-27 00:55:02.996256] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.996264] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.996274] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.996281] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.996288] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.996295] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:21:10.554 [2024-04-27 00:55:02.996301] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:21:10.554 [2024-04-27 00:55:02.996309] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:21:10.554 [2024-04-27 00:55:02.996330] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996336] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.554 [2024-04-27 00:55:02.996344] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.554 [2024-04-27 00:55:02.996353] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996359] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.554 [2024-04-27 00:55:02.996365] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:10.554 [2024-04-27 00:55:02.996375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:10.554 [2024-04-27 00:55:02.996386] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.554 [2024-04-27 00:55:02.996392] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:10.554 [2024-04-27 00:55:02.996494] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.554 [2024-04-27 00:55:02.996501] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.555 [2024-04-27 00:55:02.996506] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996511] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.555 [2024-04-27 00:55:02.996520] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.555 [2024-04-27 00:55:02.996528] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.555 [2024-04-27 00:55:02.996531] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996536] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on 
tqpair=0x614000002040 00:21:10.555 [2024-04-27 00:55:02.996544] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996549] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:10.555 [2024-04-27 00:55:02.996557] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.555 [2024-04-27 00:55:02.996567] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:10.555 [2024-04-27 00:55:02.996653] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.555 [2024-04-27 00:55:02.996659] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.555 [2024-04-27 00:55:02.996663] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996667] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:10.555 [2024-04-27 00:55:02.996676] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996680] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:10.555 [2024-04-27 00:55:02.996688] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.555 [2024-04-27 00:55:02.996697] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:10.555 [2024-04-27 00:55:02.996779] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.555 [2024-04-27 00:55:02.996786] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.555 [2024-04-27 00:55:02.996789] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996793] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:10.555 [2024-04-27 00:55:02.996801] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996808] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:10.555 [2024-04-27 00:55:02.996816] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.555 [2024-04-27 00:55:02.996825] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:10.555 [2024-04-27 00:55:02.996912] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.555 [2024-04-27 00:55:02.996918] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.555 [2024-04-27 00:55:02.996924] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996929] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:10.555 [2024-04-27 00:55:02.996943] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996949] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x614000002040) 00:21:10.555 [2024-04-27 00:55:02.996959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG 
PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.555 [2024-04-27 00:55:02.996968] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996974] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x614000002040) 00:21:10.555 [2024-04-27 00:55:02.996984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.555 [2024-04-27 00:55:02.996993] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.996998] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x614000002040) 00:21:10.555 [2024-04-27 00:55:02.997006] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.555 [2024-04-27 00:55:02.997016] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:02.997021] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:21:10.555 [2024-04-27 00:55:02.997029] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.555 [2024-04-27 00:55:02.997040] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b7e0, cid 5, qid 0 00:21:10.555 [2024-04-27 00:55:02.997047] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b680, cid 4, qid 0 00:21:10.555 [2024-04-27 00:55:02.997052] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b940, cid 6, qid 0 00:21:10.555 [2024-04-27 00:55:02.997060] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:21:10.555 [2024-04-27 00:55:02.997208] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.555 [2024-04-27 00:55:02.997215] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.555 [2024-04-27 00:55:03.001225] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001232] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=8192, cccid=5 00:21:10.555 [2024-04-27 00:55:03.001238] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b7e0) on tqpair(0x614000002040): expected_datao=0, payload_size=8192 00:21:10.555 [2024-04-27 00:55:03.001244] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001259] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001264] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001276] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.555 [2024-04-27 00:55:03.001283] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.555 [2024-04-27 00:55:03.001287] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001291] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=4 00:21:10.555 [2024-04-27 
00:55:03.001297] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b680) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:21:10.555 [2024-04-27 00:55:03.001301] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001309] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001313] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001319] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.555 [2024-04-27 00:55:03.001328] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.555 [2024-04-27 00:55:03.001332] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001337] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=512, cccid=6 00:21:10.555 [2024-04-27 00:55:03.001343] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001b940) on tqpair(0x614000002040): expected_datao=0, payload_size=512 00:21:10.555 [2024-04-27 00:55:03.001347] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001354] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001358] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001364] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:10.555 [2024-04-27 00:55:03.001370] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:10.555 [2024-04-27 00:55:03.001374] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001378] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x614000002040): datao=0, datal=4096, cccid=7 00:21:10.555 [2024-04-27 00:55:03.001383] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x62600001baa0) on tqpair(0x614000002040): expected_datao=0, payload_size=4096 00:21:10.555 [2024-04-27 00:55:03.001387] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.555 [2024-04-27 00:55:03.001395] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:10.556 [2024-04-27 00:55:03.001399] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:10.556 [2024-04-27 00:55:03.001405] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.556 [2024-04-27 00:55:03.001411] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.556 [2024-04-27 00:55:03.001415] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.556 [2024-04-27 00:55:03.001420] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b7e0) on tqpair=0x614000002040 00:21:10.556 [2024-04-27 00:55:03.001438] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.556 [2024-04-27 00:55:03.001444] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.556 [2024-04-27 00:55:03.001448] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.556 [2024-04-27 00:55:03.001452] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b680) on tqpair=0x614000002040 00:21:10.556 [2024-04-27 00:55:03.001464] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.556 [2024-04-27 00:55:03.001470] 
nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.556 [2024-04-27 00:55:03.001474] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.556 [2024-04-27 00:55:03.001478] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b940) on tqpair=0x614000002040 00:21:10.556 [2024-04-27 00:55:03.001487] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.556 [2024-04-27 00:55:03.001493] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.556 [2024-04-27 00:55:03.001497] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.556 [2024-04-27 00:55:03.001501] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:21:10.556 ===================================================== 00:21:10.556 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:10.556 ===================================================== 00:21:10.556 Controller Capabilities/Features 00:21:10.556 ================================ 00:21:10.556 Vendor ID: 8086 00:21:10.556 Subsystem Vendor ID: 8086 00:21:10.556 Serial Number: SPDK00000000000001 00:21:10.556 Model Number: SPDK bdev Controller 00:21:10.556 Firmware Version: 24.05 00:21:10.556 Recommended Arb Burst: 6 00:21:10.556 IEEE OUI Identifier: e4 d2 5c 00:21:10.556 Multi-path I/O 00:21:10.556 May have multiple subsystem ports: Yes 00:21:10.556 May have multiple controllers: Yes 00:21:10.556 Associated with SR-IOV VF: No 00:21:10.556 Max Data Transfer Size: 131072 00:21:10.556 Max Number of Namespaces: 32 00:21:10.556 Max Number of I/O Queues: 127 00:21:10.556 NVMe Specification Version (VS): 1.3 00:21:10.556 NVMe Specification Version (Identify): 1.3 00:21:10.556 Maximum Queue Entries: 128 00:21:10.556 Contiguous Queues Required: Yes 00:21:10.556 Arbitration Mechanisms Supported 00:21:10.556 Weighted Round Robin: Not Supported 00:21:10.556 Vendor Specific: Not Supported 00:21:10.556 Reset Timeout: 15000 ms 00:21:10.556 Doorbell Stride: 4 bytes 00:21:10.556 NVM Subsystem Reset: Not Supported 00:21:10.556 Command Sets Supported 00:21:10.556 NVM Command Set: Supported 00:21:10.556 Boot Partition: Not Supported 00:21:10.556 Memory Page Size Minimum: 4096 bytes 00:21:10.556 Memory Page Size Maximum: 4096 bytes 00:21:10.556 Persistent Memory Region: Not Supported 00:21:10.556 Optional Asynchronous Events Supported 00:21:10.556 Namespace Attribute Notices: Supported 00:21:10.556 Firmware Activation Notices: Not Supported 00:21:10.556 ANA Change Notices: Not Supported 00:21:10.556 PLE Aggregate Log Change Notices: Not Supported 00:21:10.556 LBA Status Info Alert Notices: Not Supported 00:21:10.556 EGE Aggregate Log Change Notices: Not Supported 00:21:10.556 Normal NVM Subsystem Shutdown event: Not Supported 00:21:10.556 Zone Descriptor Change Notices: Not Supported 00:21:10.556 Discovery Log Change Notices: Not Supported 00:21:10.556 Controller Attributes 00:21:10.556 128-bit Host Identifier: Supported 00:21:10.556 Non-Operational Permissive Mode: Not Supported 00:21:10.556 NVM Sets: Not Supported 00:21:10.556 Read Recovery Levels: Not Supported 00:21:10.556 Endurance Groups: Not Supported 00:21:10.556 Predictable Latency Mode: Not Supported 00:21:10.556 Traffic Based Keep ALive: Not Supported 00:21:10.556 Namespace Granularity: Not Supported 00:21:10.556 SQ Associations: Not Supported 00:21:10.556 UUID List: Not Supported 00:21:10.556 Multi-Domain Subsystem: Not Supported 
00:21:10.556 Fixed Capacity Management: Not Supported 00:21:10.556 Variable Capacity Management: Not Supported 00:21:10.556 Delete Endurance Group: Not Supported 00:21:10.556 Delete NVM Set: Not Supported 00:21:10.556 Extended LBA Formats Supported: Not Supported 00:21:10.556 Flexible Data Placement Supported: Not Supported 00:21:10.556 00:21:10.556 Controller Memory Buffer Support 00:21:10.556 ================================ 00:21:10.556 Supported: No 00:21:10.556 00:21:10.556 Persistent Memory Region Support 00:21:10.556 ================================ 00:21:10.556 Supported: No 00:21:10.556 00:21:10.556 Admin Command Set Attributes 00:21:10.556 ============================ 00:21:10.556 Security Send/Receive: Not Supported 00:21:10.556 Format NVM: Not Supported 00:21:10.556 Firmware Activate/Download: Not Supported 00:21:10.556 Namespace Management: Not Supported 00:21:10.556 Device Self-Test: Not Supported 00:21:10.556 Directives: Not Supported 00:21:10.556 NVMe-MI: Not Supported 00:21:10.556 Virtualization Management: Not Supported 00:21:10.556 Doorbell Buffer Config: Not Supported 00:21:10.556 Get LBA Status Capability: Not Supported 00:21:10.556 Command & Feature Lockdown Capability: Not Supported 00:21:10.556 Abort Command Limit: 4 00:21:10.556 Async Event Request Limit: 4 00:21:10.556 Number of Firmware Slots: N/A 00:21:10.556 Firmware Slot 1 Read-Only: N/A 00:21:10.556 Firmware Activation Without Reset: N/A 00:21:10.556 Multiple Update Detection Support: N/A 00:21:10.556 Firmware Update Granularity: No Information Provided 00:21:10.556 Per-Namespace SMART Log: No 00:21:10.556 Asymmetric Namespace Access Log Page: Not Supported 00:21:10.556 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:10.556 Command Effects Log Page: Supported 00:21:10.556 Get Log Page Extended Data: Supported 00:21:10.556 Telemetry Log Pages: Not Supported 00:21:10.556 Persistent Event Log Pages: Not Supported 00:21:10.556 Supported Log Pages Log Page: May Support 00:21:10.556 Commands Supported & Effects Log Page: Not Supported 00:21:10.556 Feature Identifiers & Effects Log Page:May Support 00:21:10.556 NVMe-MI Commands & Effects Log Page: May Support 00:21:10.556 Data Area 4 for Telemetry Log: Not Supported 00:21:10.556 Error Log Page Entries Supported: 128 00:21:10.556 Keep Alive: Supported 00:21:10.556 Keep Alive Granularity: 10000 ms 00:21:10.556 00:21:10.557 NVM Command Set Attributes 00:21:10.557 ========================== 00:21:10.557 Submission Queue Entry Size 00:21:10.557 Max: 64 00:21:10.557 Min: 64 00:21:10.557 Completion Queue Entry Size 00:21:10.557 Max: 16 00:21:10.557 Min: 16 00:21:10.557 Number of Namespaces: 32 00:21:10.557 Compare Command: Supported 00:21:10.557 Write Uncorrectable Command: Not Supported 00:21:10.557 Dataset Management Command: Supported 00:21:10.557 Write Zeroes Command: Supported 00:21:10.557 Set Features Save Field: Not Supported 00:21:10.557 Reservations: Supported 00:21:10.557 Timestamp: Not Supported 00:21:10.557 Copy: Supported 00:21:10.557 Volatile Write Cache: Present 00:21:10.557 Atomic Write Unit (Normal): 1 00:21:10.557 Atomic Write Unit (PFail): 1 00:21:10.557 Atomic Compare & Write Unit: 1 00:21:10.557 Fused Compare & Write: Supported 00:21:10.557 Scatter-Gather List 00:21:10.557 SGL Command Set: Supported 00:21:10.557 SGL Keyed: Supported 00:21:10.557 SGL Bit Bucket Descriptor: Not Supported 00:21:10.557 SGL Metadata Pointer: Not Supported 00:21:10.557 Oversized SGL: Not Supported 00:21:10.557 SGL Metadata Address: Not Supported 00:21:10.557 SGL Offset: 
Supported 00:21:10.557 Transport SGL Data Block: Not Supported 00:21:10.557 Replay Protected Memory Block: Not Supported 00:21:10.557 00:21:10.557 Firmware Slot Information 00:21:10.557 ========================= 00:21:10.557 Active slot: 1 00:21:10.557 Slot 1 Firmware Revision: 24.05 00:21:10.557 00:21:10.557 00:21:10.557 Commands Supported and Effects 00:21:10.557 ============================== 00:21:10.557 Admin Commands 00:21:10.557 -------------- 00:21:10.557 Get Log Page (02h): Supported 00:21:10.557 Identify (06h): Supported 00:21:10.557 Abort (08h): Supported 00:21:10.557 Set Features (09h): Supported 00:21:10.557 Get Features (0Ah): Supported 00:21:10.557 Asynchronous Event Request (0Ch): Supported 00:21:10.557 Keep Alive (18h): Supported 00:21:10.557 I/O Commands 00:21:10.557 ------------ 00:21:10.557 Flush (00h): Supported LBA-Change 00:21:10.557 Write (01h): Supported LBA-Change 00:21:10.557 Read (02h): Supported 00:21:10.557 Compare (05h): Supported 00:21:10.557 Write Zeroes (08h): Supported LBA-Change 00:21:10.557 Dataset Management (09h): Supported LBA-Change 00:21:10.557 Copy (19h): Supported LBA-Change 00:21:10.557 Unknown (79h): Supported LBA-Change 00:21:10.557 Unknown (7Ah): Supported 00:21:10.557 00:21:10.557 Error Log 00:21:10.557 ========= 00:21:10.557 00:21:10.557 Arbitration 00:21:10.557 =========== 00:21:10.557 Arbitration Burst: 1 00:21:10.557 00:21:10.557 Power Management 00:21:10.557 ================ 00:21:10.557 Number of Power States: 1 00:21:10.557 Current Power State: Power State #0 00:21:10.557 Power State #0: 00:21:10.557 Max Power: 0.00 W 00:21:10.557 Non-Operational State: Operational 00:21:10.557 Entry Latency: Not Reported 00:21:10.557 Exit Latency: Not Reported 00:21:10.557 Relative Read Throughput: 0 00:21:10.557 Relative Read Latency: 0 00:21:10.557 Relative Write Throughput: 0 00:21:10.557 Relative Write Latency: 0 00:21:10.557 Idle Power: Not Reported 00:21:10.557 Active Power: Not Reported 00:21:10.557 Non-Operational Permissive Mode: Not Supported 00:21:10.557 00:21:10.557 Health Information 00:21:10.557 ================== 00:21:10.557 Critical Warnings: 00:21:10.557 Available Spare Space: OK 00:21:10.557 Temperature: OK 00:21:10.557 Device Reliability: OK 00:21:10.557 Read Only: No 00:21:10.557 Volatile Memory Backup: OK 00:21:10.557 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:10.557 Temperature Threshold: [2024-04-27 00:55:03.001630] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.557 [2024-04-27 00:55:03.001637] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x614000002040) 00:21:10.557 [2024-04-27 00:55:03.001648] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.557 [2024-04-27 00:55:03.001662] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001baa0, cid 7, qid 0 00:21:10.557 [2024-04-27 00:55:03.001748] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.557 [2024-04-27 00:55:03.001756] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.557 [2024-04-27 00:55:03.001761] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.557 [2024-04-27 00:55:03.001766] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001baa0) on tqpair=0x614000002040 00:21:10.557 [2024-04-27 00:55:03.001803] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:21:10.557 [2024-04-27 00:55:03.001816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.557 [2024-04-27 00:55:03.001824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.557 [2024-04-27 00:55:03.001830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.557 [2024-04-27 00:55:03.001837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:10.557 [2024-04-27 00:55:03.001846] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.557 [2024-04-27 00:55:03.001852] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.557 [2024-04-27 00:55:03.001860] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.557 [2024-04-27 00:55:03.001870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.557 [2024-04-27 00:55:03.001882] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.557 [2024-04-27 00:55:03.001963] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.557 [2024-04-27 00:55:03.001970] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.557 [2024-04-27 00:55:03.001975] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.558 [2024-04-27 00:55:03.001979] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.558 [2024-04-27 00:55:03.001989] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.558 [2024-04-27 00:55:03.001994] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.558 [2024-04-27 00:55:03.002002] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.558 [2024-04-27 00:55:03.002011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.558 [2024-04-27 00:55:03.002023] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.558 [2024-04-27 00:55:03.002126] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.558 [2024-04-27 00:55:03.002132] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.558 [2024-04-27 00:55:03.002136] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.558 [2024-04-27 00:55:03.002140] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.558 [2024-04-27 00:55:03.002146] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:21:10.558 [2024-04-27 00:55:03.002153] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:21:10.558 [2024-04-27 00:55:03.002163] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.558 [2024-04-27 00:55:03.002168] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.558 [2024-04-27 
00:55:03.002173] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.558 [2024-04-27 00:55:03.002184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.558 [2024-04-27 00:55:03.002195] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 [... the same nine-entry DEBUG/NOTICE sequence (pdu type = 5, complete tcp_req, build_contig_request, capsule_cmd cid=3, FABRIC PROPERTY GET qid:0 cid:3, cmd_send_complete for tcp req 0x62600001b520 on tqpair 0x614000002040) repeats once per shutdown poll from 00:55:03.002273 through 00:55:03.005067; the near-identical iterations are elided here and the trace resumes mid-iteration below ...]
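
The DEBUG burst above, including its elided repetitions, is the host side of an orderly controller shutdown: identify.sh is done with nqn.2016-06.io.spdk:cnode1, so nvme_ctrlr_destruct sets CC.SHN and then polls CSTS until CSTS.SHST reports "shutdown complete". On a fabrics transport there are no memory-mapped registers, so each CSTS read becomes a Fabrics Property Get capsule on the admin queue, one full send/complete round trip per poll, which is exactly what each repeated iteration shows. Below is a minimal shell sketch of that poll loop; prop_get is a hypothetical stub standing in for the Property Get round trip (on a live fabrics controller something like nvme-cli's get-property could play this role), and it illustrates the logic only, not SPDK's actual C implementation in nvme_ctrlr_shutdown_poll_async.

# Sketch only: prop_get is a hypothetical stand-in for a Fabrics Property Get.
prop_get() { echo 0x9; }                     # stub CSTS: RDY=1, SHST=10b (complete)
shutdown_poll() {
    local deadline=$(( $(date +%s) + 10 ))   # shutdown timeout = 10000 ms, as logged
    while :; do
        local csts shst
        csts=$(prop_get 0x1c)                # CSTS is the property at offset 0x1c
        shst=$(( (csts >> 2) & 0x3 ))        # CSTS.SHST occupies bits 3:2
        if [ "$shst" -eq 2 ]; then
            echo "shutdown complete"
            return 0
        fi
        if [ "$(date +%s)" -ge "$deadline" ]; then
            echo "shutdown timed out"
            return 1
        fi
        sleep 0.001                          # one Property Get round trip per poll
    done
}
shutdown_poll

With RTD3E = 0 the driver falls back to the default 10000 ms shutdown timeout noted earlier; as the next entries show, this controller finished in 7 milliseconds.
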
00:21:10.560 [2024-04-27 00:55:03.005076] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.560 [2024-04-27 00:55:03.005080] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.560 [2024-04-27 00:55:03.005084] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.560 [2024-04-27 00:55:03.005092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.560 [2024-04-27 00:55:03.005102] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.560 [2024-04-27 00:55:03.005182] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.560 [2024-04-27 00:55:03.005188] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.560 [2024-04-27 00:55:03.005192] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.560 [2024-04-27 00:55:03.005196] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.560 [2024-04-27 00:55:03.005207] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:10.560 [2024-04-27 00:55:03.005211] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:10.560 [2024-04-27 00:55:03.005215] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x614000002040) 00:21:10.560 [2024-04-27 00:55:03.009227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:10.560 [2024-04-27 00:55:03.009240] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x62600001b520, cid 3, qid 0 00:21:10.560 [2024-04-27 00:55:03.009317] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:10.560 [2024-04-27 00:55:03.009323] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:10.560 [2024-04-27 00:55:03.009327] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:10.560 [2024-04-27 00:55:03.009331] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x62600001b520) on tqpair=0x614000002040 00:21:10.560 [2024-04-27 00:55:03.009339] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:21:10.560 0 Kelvin (-273 Celsius) 00:21:10.560 Available Spare: 0% 00:21:10.560 Available Spare Threshold: 0% 00:21:10.560 Life Percentage Used: 0% 00:21:10.560 Data Units Read: 0 00:21:10.560 Data Units Written: 0 00:21:10.560 Host Read Commands: 0 00:21:10.560 Host Write Commands: 0 00:21:10.560 Controller Busy Time: 0 minutes 00:21:10.560 Power Cycles: 0 00:21:10.560 Power On Hours: 0 hours 00:21:10.560 Unsafe Shutdowns: 0 00:21:10.560 Unrecoverable Media Errors: 0 00:21:10.560 Lifetime Error Log Entries: 0 00:21:10.560 Warning Temperature Time: 0 minutes 00:21:10.560 Critical Temperature Time: 0 minutes 00:21:10.560 00:21:10.560 Number of Queues 00:21:10.560 ================ 00:21:10.560 Number of I/O Submission Queues: 127 00:21:10.560 Number of I/O Completion Queues: 127 00:21:10.560 00:21:10.560 Active Namespaces 00:21:10.560 ================= 00:21:10.560 Namespace ID:1 00:21:10.560 Error Recovery Timeout: Unlimited 00:21:10.560 Command Set Identifier: NVM (00h) 00:21:10.560 Deallocate: Supported 00:21:10.560 Deallocated/Unwritten Error: Not Supported 00:21:10.560 Deallocated Read 
Value: Unknown 00:21:10.561 Deallocate in Write Zeroes: Not Supported 00:21:10.561 Deallocated Guard Field: 0xFFFF 00:21:10.561 Flush: Supported 00:21:10.561 Reservation: Supported 00:21:10.561 Namespace Sharing Capabilities: Multiple Controllers 00:21:10.561 Size (in LBAs): 131072 (0GiB) 00:21:10.561 Capacity (in LBAs): 131072 (0GiB) 00:21:10.561 Utilization (in LBAs): 131072 (0GiB) 00:21:10.561 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:10.561 EUI64: ABCDEF0123456789 00:21:10.561 UUID: ba459928-1120-46da-accb-864eadff9a44 00:21:10.561 Thin Provisioning: Not Supported 00:21:10.561 Per-NS Atomic Units: Yes 00:21:10.561 Atomic Boundary Size (Normal): 0 00:21:10.561 Atomic Boundary Size (PFail): 0 00:21:10.561 Atomic Boundary Offset: 0 00:21:10.561 Maximum Single Source Range Length: 65535 00:21:10.561 Maximum Copy Length: 65535 00:21:10.561 Maximum Source Range Count: 1 00:21:10.561 NGUID/EUI64 Never Reused: No 00:21:10.561 Namespace Write Protected: No 00:21:10.561 Number of LBA Formats: 1 00:21:10.561 Current LBA Format: LBA Format #00 00:21:10.561 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:10.561 00:21:10.561 00:55:03 -- host/identify.sh@51 -- # sync 00:21:10.561 00:55:03 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:10.561 00:55:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.561 00:55:03 -- common/autotest_common.sh@10 -- # set +x 00:21:10.561 00:55:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.561 00:55:03 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:10.561 00:55:03 -- host/identify.sh@56 -- # nvmftestfini 00:21:10.561 00:55:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:10.561 00:55:03 -- nvmf/common.sh@117 -- # sync 00:21:10.561 00:55:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:10.561 00:55:03 -- nvmf/common.sh@120 -- # set +e 00:21:10.561 00:55:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:10.561 00:55:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:10.561 rmmod nvme_tcp 00:21:10.561 rmmod nvme_fabrics 00:21:10.561 rmmod nvme_keyring 00:21:10.561 00:55:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:10.561 00:55:03 -- nvmf/common.sh@124 -- # set -e 00:21:10.561 00:55:03 -- nvmf/common.sh@125 -- # return 0 00:21:10.561 00:55:03 -- nvmf/common.sh@478 -- # '[' -n 2823754 ']' 00:21:10.561 00:55:03 -- nvmf/common.sh@479 -- # killprocess 2823754 00:21:10.561 00:55:03 -- common/autotest_common.sh@936 -- # '[' -z 2823754 ']' 00:21:10.561 00:55:03 -- common/autotest_common.sh@940 -- # kill -0 2823754 00:21:10.561 00:55:03 -- common/autotest_common.sh@941 -- # uname 00:21:10.561 00:55:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:10.561 00:55:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2823754 00:21:10.561 00:55:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:10.561 00:55:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:10.561 00:55:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2823754' 00:21:10.561 killing process with pid 2823754 00:21:10.561 00:55:03 -- common/autotest_common.sh@955 -- # kill 2823754 00:21:10.561 [2024-04-27 00:55:03.169756] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:21:10.561 00:55:03 -- common/autotest_common.sh@960 -- # wait 2823754 00:21:11.128 00:55:03 -- 
nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:11.128 00:55:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:11.128 00:55:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:11.128 00:55:03 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:11.128 00:55:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:11.128 00:55:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:11.128 00:55:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:11.128 00:55:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.693 00:55:05 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:13.693 00:21:13.693 real 0m9.817s 00:21:13.693 user 0m8.202s 00:21:13.693 sys 0m4.635s 00:21:13.693 00:55:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:13.693 00:55:05 -- common/autotest_common.sh@10 -- # set +x 00:21:13.693 ************************************ 00:21:13.693 END TEST nvmf_identify 00:21:13.693 ************************************ 00:21:13.693 00:55:05 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:13.693 00:55:05 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:13.693 00:55:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:13.693 00:55:05 -- common/autotest_common.sh@10 -- # set +x 00:21:13.693 ************************************ 00:21:13.693 START TEST nvmf_perf 00:21:13.693 ************************************ 00:21:13.693 00:55:05 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:13.693 * Looking for test storage... 00:21:13.693 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:13.693 00:55:06 -- host/perf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:13.693 00:55:06 -- nvmf/common.sh@7 -- # uname -s 00:21:13.693 00:55:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:13.693 00:55:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:13.693 00:55:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:13.693 00:55:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:13.693 00:55:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:13.693 00:55:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:13.693 00:55:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:13.693 00:55:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:13.693 00:55:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:13.693 00:55:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:13.693 00:55:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:21:13.693 00:55:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:21:13.693 00:55:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:13.693 00:55:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:13.693 00:55:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:13.693 00:55:06 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:13.693 00:55:06 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:13.693 00:55:06 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:13.693 00:55:06 -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:55:06 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:55:06 -- paths/export.sh@2 -- # PATH=[... /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin repeated several times over, followed by the stock system PATH; duplicate segments elided ...] 00:55:06 -- paths/export.sh@3 -- # PATH=[... same value with /opt/go/1.21.1/bin prepended; elided ...] 00:55:06 -- paths/export.sh@4 -- # PATH=[... same value with /opt/protoc/21.7/bin prepended; elided ...] 00:55:06 -- paths/export.sh@5 -- # export PATH 00:55:06 -- paths/export.sh@6 -- # echo [... the exported PATH echoed back, identical to the @4 value; elided ...] 00:55:06 -- nvmf/common.sh@47 -- # : 0 00:55:06 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:55:06 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:55:06 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:55:06 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:55:06 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:55:06 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:55:06 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:55:06 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:55:06 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:55:06 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:55:06 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:55:06 -- host/perf.sh@17 -- # nvmftestinit 00:55:06 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:55:06 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT
SIGTERM EXIT 00:21:13.694 00:55:06 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:13.694 00:55:06 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:13.694 00:55:06 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:13.694 00:55:06 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:13.694 00:55:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:13.694 00:55:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:13.694 00:55:06 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:13.694 00:55:06 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:13.694 00:55:06 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:13.694 00:55:06 -- common/autotest_common.sh@10 -- # set +x 00:21:18.992 00:55:11 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:18.992 00:55:11 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:18.992 00:55:11 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:18.992 00:55:11 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:18.992 00:55:11 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:18.992 00:55:11 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:18.992 00:55:11 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:18.992 00:55:11 -- nvmf/common.sh@295 -- # net_devs=() 00:21:18.992 00:55:11 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:18.992 00:55:11 -- nvmf/common.sh@296 -- # e810=() 00:21:18.992 00:55:11 -- nvmf/common.sh@296 -- # local -ga e810 00:21:18.992 00:55:11 -- nvmf/common.sh@297 -- # x722=() 00:21:18.992 00:55:11 -- nvmf/common.sh@297 -- # local -ga x722 00:21:18.992 00:55:11 -- nvmf/common.sh@298 -- # mlx=() 00:21:18.992 00:55:11 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:18.992 00:55:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:18.992 00:55:11 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:18.992 00:55:11 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:18.992 00:55:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.992 00:55:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:18.992 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:18.992 00:55:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@350 
-- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:18.992 00:55:11 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:18.992 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:18.992 00:55:11 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:18.992 00:55:11 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.992 00:55:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.992 00:55:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:18.992 00:55:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.992 00:55:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:18.992 Found net devices under 0000:27:00.0: cvl_0_0 00:21:18.992 00:55:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.992 00:55:11 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:18.992 00:55:11 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:18.992 00:55:11 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:18.992 00:55:11 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:18.992 00:55:11 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:18.992 Found net devices under 0000:27:00.1: cvl_0_1 00:21:18.992 00:55:11 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:18.992 00:55:11 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:18.992 00:55:11 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:18.992 00:55:11 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:18.992 00:55:11 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:18.992 00:55:11 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:18.992 00:55:11 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:18.993 00:55:11 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:18.993 00:55:11 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:18.993 00:55:11 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:18.993 00:55:11 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:18.993 00:55:11 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:18.993 00:55:11 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:18.993 00:55:11 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:18.993 00:55:11 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:18.993 00:55:11 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:18.993 00:55:11 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:18.993 00:55:11 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:18.993 00:55:11 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:18.993 00:55:11 -- nvmf/common.sh@255 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:18.993 00:55:11 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:18.993 00:55:11 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:19.253 00:55:11 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:19.253 00:55:11 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:19.253 00:55:11 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:19.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:19.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.338 ms 00:21:19.253 00:21:19.253 --- 10.0.0.2 ping statistics --- 00:21:19.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.253 rtt min/avg/max/mdev = 0.338/0.338/0.338/0.000 ms 00:21:19.253 00:55:11 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:19.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:19.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.044 ms 00:21:19.253 00:21:19.253 --- 10.0.0.1 ping statistics --- 00:21:19.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:19.253 rtt min/avg/max/mdev = 0.044/0.044/0.044/0.000 ms 00:21:19.253 00:55:11 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:19.253 00:55:11 -- nvmf/common.sh@411 -- # return 0 00:21:19.253 00:55:11 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:19.253 00:55:11 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:19.253 00:55:11 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:19.253 00:55:11 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:19.253 00:55:11 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:19.253 00:55:11 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:19.253 00:55:11 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:19.253 00:55:11 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:19.253 00:55:11 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:19.253 00:55:11 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:19.253 00:55:11 -- common/autotest_common.sh@10 -- # set +x 00:21:19.253 00:55:11 -- nvmf/common.sh@470 -- # nvmfpid=2828212 00:21:19.253 00:55:11 -- nvmf/common.sh@471 -- # waitforlisten 2828212 00:21:19.253 00:55:11 -- common/autotest_common.sh@817 -- # '[' -z 2828212 ']' 00:21:19.253 00:55:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:19.253 00:55:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:19.253 00:55:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:19.253 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:19.253 00:55:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:19.253 00:55:11 -- common/autotest_common.sh@10 -- # set +x 00:21:19.253 00:55:11 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:19.253 [2024-04-27 00:55:11.862279] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
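
At this point nvmftestinit has stitched the two detected ports into a self-contained NVMe/TCP test bed: the target-side port cvl_0_0 was moved into its own network namespace with address 10.0.0.2, the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, TCP port 4420 was opened in iptables, and reachability was verified with one ping in each direction. A condensed replay of the commands traced above, with interface names and addresses exactly as logged (requires root):

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1    # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                          # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                    # initiator to target (0.338 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target to initiator

Every target-side command from here on is therefore wrapped in ip netns exec cvl_0_0_ns_spdk, which is why nvmf_tgt is launched through it in the entry just above.
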
00:21:19.253 [2024-04-27 00:55:11.862384] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:19.253 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.514 [2024-04-27 00:55:11.985796] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:19.514 [2024-04-27 00:55:12.082199] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:19.514 [2024-04-27 00:55:12.082241] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:19.514 [2024-04-27 00:55:12.082253] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:19.514 [2024-04-27 00:55:12.082263] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:19.514 [2024-04-27 00:55:12.082270] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:19.514 [2024-04-27 00:55:12.082383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:19.514 [2024-04-27 00:55:12.082462] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.514 [2024-04-27 00:55:12.082562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:19.514 [2024-04-27 00:55:12.082572] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:20.083 00:55:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:20.083 00:55:12 -- common/autotest_common.sh@850 -- # return 0 00:21:20.083 00:55:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:20.083 00:55:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:20.083 00:55:12 -- common/autotest_common.sh@10 -- # set +x 00:21:20.083 00:55:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:20.083 00:55:12 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:20.083 00:55:12 -- host/perf.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:30.059 00:55:21 -- host/perf.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:30.059 00:55:21 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:30.059 00:55:21 -- host/perf.sh@30 -- # local_nvme_trid=0000:c9:00.0 00:21:30.059 00:55:21 -- host/perf.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:30.059 00:55:21 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:30.059 00:55:21 -- host/perf.sh@33 -- # '[' -n 0000:c9:00.0 ']' 00:21:30.059 00:55:21 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:30.059 00:55:21 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:30.059 00:55:21 -- host/perf.sh@42 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:30.059 [2024-04-27 00:55:21.689183] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:30.059 00:55:21 -- host/perf.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:30.059 00:55:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:30.059 00:55:21 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:30.059 00:55:21 -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:30.059 00:55:21 -- host/perf.sh@46 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:30.059 00:55:22 -- host/perf.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:30.059 [2024-04-27 00:55:22.283829] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:30.059 00:55:22 -- host/perf.sh@49 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:30.059 00:55:22 -- host/perf.sh@52 -- # '[' -n 0000:c9:00.0 ']' 00:21:30.059 00:55:22 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:21:30.059 00:55:22 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:21:30.059 00:55:22 -- host/perf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:c9:00.0' 00:21:31.433 Initializing NVMe Controllers 00:21:31.433 Attached to NVMe Controller at 0000:c9:00.0 [8086:0a54] 00:21:31.433 Associating PCIE (0000:c9:00.0) NSID 1 with lcore 0 00:21:31.433 Initialization complete. Launching workers. 00:21:31.433 ======================================================== 00:21:31.433 Latency(us) 00:21:31.433 Device Information : IOPS MiB/s Average min max 00:21:31.433 PCIE (0000:c9:00.0) NSID 1 from core 0: 96127.74 375.50 332.53 30.22 8224.56 00:21:31.433 ======================================================== 00:21:31.433 Total : 96127.74 375.50 332.53 30.22 8224.56 00:21:31.433 00:21:31.433 00:55:23 -- host/perf.sh@56 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:31.433 EAL: No free 2048 kB hugepages reported on node 1 00:21:32.812 Initializing NVMe Controllers 00:21:32.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:32.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:32.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:32.812 Initialization complete. Launching workers. 
00:21:32.812 ======================================================== 00:21:32.812 Latency(us) 00:21:32.812 Device Information : IOPS MiB/s Average min max 00:21:32.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 128.00 0.50 7939.35 112.48 45095.90 00:21:32.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 42.00 0.16 24164.78 7964.20 48023.67 00:21:32.812 ======================================================== 00:21:32.812 Total : 170.00 0.66 11947.98 112.48 48023.67 00:21:32.812 00:21:32.812 00:55:25 -- host/perf.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:32.812 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.193 Initializing NVMe Controllers 00:21:34.193 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:34.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:34.193 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:34.193 Initialization complete. Launching workers. 00:21:34.193 ======================================================== 00:21:34.193 Latency(us) 00:21:34.193 Device Information : IOPS MiB/s Average min max 00:21:34.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11132.07 43.48 2874.66 312.78 6494.13 00:21:34.193 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3869.33 15.11 8289.11 6548.30 16192.17 00:21:34.193 ======================================================== 00:21:34.193 Total : 15001.40 58.60 4271.22 312.78 16192.17 00:21:34.193 00:21:34.193 00:55:26 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:21:34.193 00:55:26 -- host/perf.sh@60 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:21:34.193 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.730 Initializing NVMe Controllers 00:21:36.730 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:36.730 Controller IO queue size 128, less than required. 00:21:36.730 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:36.730 Controller IO queue size 128, less than required. 00:21:36.730 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:36.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:36.730 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:36.730 Initialization complete. Launching workers. 
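The recurring spdk_nvme_perf flags in these runs, glossed from how they are used here (a hedged reading, not the tool's full usage text): -q is queue depth per namespace, -o the I/O size in bytes, -w the workload pattern, -M the read percentage of the mix (50 gives a 50/50 randrw split), -t the run time in seconds, and -r the transport ID of the target to attach to; -H and -I in the -q 32 run above enable TCP header and data digests, as we read them. For example, the 4 KiB queue-depth-32 invocation, paths shortened:

    ./build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'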
00:21:36.730 ======================================================== 00:21:36.730 Latency(us) 00:21:36.730 Device Information : IOPS MiB/s Average min max 00:21:36.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2181.40 545.35 59332.04 35854.45 127143.92 00:21:36.730 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 619.48 154.87 222531.17 77944.61 355984.22 00:21:36.730 ======================================================== 00:21:36.730 Total : 2800.88 700.22 95427.23 35854.45 355984.22 00:21:36.730 00:21:36.730 00:55:29 -- host/perf.sh@64 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:21:37.081 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.081 No valid NVMe controllers or AIO or URING devices found 00:21:37.081 Initializing NVMe Controllers 00:21:37.081 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:37.081 Controller IO queue size 128, less than required. 00:21:37.081 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.081 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:21:37.082 Controller IO queue size 128, less than required. 00:21:37.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:37.082 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:21:37.082 WARNING: Some requested NVMe devices were skipped 00:21:37.082 00:55:29 -- host/perf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:21:37.082 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.389 Initializing NVMe Controllers 00:21:40.389 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:40.389 Controller IO queue size 128, less than required. 00:21:40.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:40.389 Controller IO queue size 128, less than required. 00:21:40.389 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:40.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:40.389 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:40.389 Initialization complete. Launching workers. 
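A quick self-consistency check on the queue-depth scaling above via Little's law (IOPS ≈ outstanding I/Os / mean latency): the -q 1 run keeps 2 I/Os in flight (one per namespace) at ~11948 us mean latency, predicting ~167 IOPS against the measured 170; the -q 32 run keeps 64 in flight at ~4271 us, predicting ~14984 against the measured 15001.

    # verify with awk (numbers copied from the two Total rows above)
    awk 'BEGIN { printf "%.0f %.0f\n", 2/11947.98e-6, 64/4271.22e-6 }'

The --transport-stat run just launched additionally dumps per-lcore TCP poll-group counters below; polls versus idle_polls gives the busy fraction of the reactor's polling loop (for NSID 1, 1 - 7663/15097, roughly 49% of polls completed work).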
00:21:40.389 00:21:40.389 ==================== 00:21:40.389 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:21:40.389 TCP transport: 00:21:40.389 polls: 15097 00:21:40.389 idle_polls: 7663 00:21:40.389 sock_completions: 7434 00:21:40.389 nvme_completions: 8001 00:21:40.389 submitted_requests: 12004 00:21:40.389 queued_requests: 1 00:21:40.389 00:21:40.389 ==================== 00:21:40.389 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:21:40.389 TCP transport: 00:21:40.389 polls: 19513 00:21:40.389 idle_polls: 11430 00:21:40.389 sock_completions: 8083 00:21:40.389 nvme_completions: 7889 00:21:40.389 submitted_requests: 11914 00:21:40.389 queued_requests: 1 00:21:40.389 ======================================================== 00:21:40.389 Latency(us) 00:21:40.389 Device Information : IOPS MiB/s Average min max 00:21:40.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1995.95 498.99 65587.66 40247.96 172794.64 00:21:40.389 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1968.00 492.00 65332.38 38084.94 141356.92 00:21:40.389 ======================================================== 00:21:40.389 Total : 3963.95 990.99 65460.92 38084.94 172794.64 00:21:40.389 00:21:40.389 00:55:32 -- host/perf.sh@66 -- # sync 00:21:40.389 00:55:32 -- host/perf.sh@67 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:40.389 00:55:32 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:21:40.389 00:55:32 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:40.389 00:55:32 -- host/perf.sh@114 -- # nvmftestfini 00:21:40.389 00:55:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:40.389 00:55:32 -- nvmf/common.sh@117 -- # sync 00:21:40.389 00:55:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:40.389 00:55:32 -- nvmf/common.sh@120 -- # set +e 00:21:40.389 00:55:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:40.389 00:55:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:40.389 rmmod nvme_tcp 00:21:40.389 rmmod nvme_fabrics 00:21:40.389 rmmod nvme_keyring 00:21:40.389 00:55:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:40.389 00:55:32 -- nvmf/common.sh@124 -- # set -e 00:21:40.389 00:55:32 -- nvmf/common.sh@125 -- # return 0 00:21:40.389 00:55:32 -- nvmf/common.sh@478 -- # '[' -n 2828212 ']' 00:21:40.389 00:55:32 -- nvmf/common.sh@479 -- # killprocess 2828212 00:21:40.389 00:55:32 -- common/autotest_common.sh@936 -- # '[' -z 2828212 ']' 00:21:40.389 00:55:32 -- common/autotest_common.sh@940 -- # kill -0 2828212 00:21:40.389 00:55:32 -- common/autotest_common.sh@941 -- # uname 00:21:40.389 00:55:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:40.389 00:55:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2828212 00:21:40.389 00:55:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:40.389 00:55:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:40.389 00:55:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2828212' 00:21:40.389 killing process with pid 2828212 00:21:40.389 00:55:32 -- common/autotest_common.sh@955 -- # kill 2828212 00:21:40.389 00:55:32 -- common/autotest_common.sh@960 -- # wait 2828212 00:21:43.683 00:55:36 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:43.683 00:55:36 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:43.683 00:55:36 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:21:43.683 00:55:36 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.683 00:55:36 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.683 00:55:36 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.683 00:55:36 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:43.683 00:55:36 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.588 00:55:38 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:45.847 00:21:45.847 real 0m32.351s 00:21:45.847 user 1m35.006s 00:21:45.847 sys 0m7.102s 00:21:45.847 00:55:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:45.847 00:55:38 -- common/autotest_common.sh@10 -- # set +x 00:21:45.847 ************************************ 00:21:45.847 END TEST nvmf_perf 00:21:45.847 ************************************ 00:21:45.847 00:55:38 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:45.847 00:55:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:45.847 00:55:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:45.847 00:55:38 -- common/autotest_common.sh@10 -- # set +x 00:21:45.847 ************************************ 00:21:45.847 START TEST nvmf_fio_host 00:21:45.847 ************************************ 00:21:45.847 00:55:38 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:21:45.847 * Looking for test storage... 00:21:45.847 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:21:45.847 00:55:38 -- host/fio.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:45.847 00:55:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.847 00:55:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.847 00:55:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.847 00:55:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.847 00:55:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.847 00:55:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.847 00:55:38 -- paths/export.sh@5 -- # export PATH 00:21:45.848 00:55:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.848 00:55:38 -- host/fio.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.848 00:55:38 -- nvmf/common.sh@7 -- # uname -s 00:21:45.848 00:55:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.848 00:55:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.848 00:55:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.848 00:55:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.848 00:55:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.848 00:55:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.848 00:55:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.848 00:55:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.848 00:55:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.848 00:55:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.848 00:55:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:21:45.848 00:55:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:21:45.848 00:55:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.848 00:55:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.848 00:55:38 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:21:45.848 00:55:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.848 00:55:38 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:21:45.848 00:55:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.848 00:55:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.848 00:55:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.848 00:55:38 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.848 00:55:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.848 00:55:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.848 00:55:38 -- paths/export.sh@5 -- # export PATH 00:21:45.848 00:55:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.848 00:55:38 -- nvmf/common.sh@47 -- # : 0 00:21:45.848 00:55:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.848 00:55:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.848 00:55:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.848 00:55:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.848 00:55:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.848 00:55:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.848 00:55:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.848 00:55:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.848 00:55:38 -- host/fio.sh@12 -- # nvmftestinit 00:21:45.848 00:55:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:45.848 00:55:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.848 00:55:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:45.848 00:55:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:45.848 00:55:38 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:21:45.848 00:55:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.848 00:55:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:45.848 00:55:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.848 00:55:38 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:21:45.848 00:55:38 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:21:45.848 00:55:38 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.848 00:55:38 -- common/autotest_common.sh@10 -- # set +x 00:21:51.120 00:55:43 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:51.120 00:55:43 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:51.120 00:55:43 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:51.120 00:55:43 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:51.120 00:55:43 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:51.120 00:55:43 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:51.120 00:55:43 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:51.120 00:55:43 -- nvmf/common.sh@295 -- # net_devs=() 00:21:51.120 00:55:43 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:51.120 00:55:43 -- nvmf/common.sh@296 -- # e810=() 00:21:51.120 00:55:43 -- nvmf/common.sh@296 -- # local -ga e810 00:21:51.120 00:55:43 -- nvmf/common.sh@297 -- # x722=() 00:21:51.120 00:55:43 -- nvmf/common.sh@297 -- # local -ga x722 00:21:51.120 00:55:43 -- nvmf/common.sh@298 -- # mlx=() 00:21:51.120 00:55:43 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:51.120 00:55:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:51.120 00:55:43 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:51.120 00:55:43 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:51.120 00:55:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.120 00:55:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:21:51.120 Found 0000:27:00.0 (0x8086 - 0x159b) 00:21:51.120 00:55:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@352 -- # [[ tcp == 
rdma ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:51.120 00:55:43 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:21:51.120 Found 0000:27:00.1 (0x8086 - 0x159b) 00:21:51.120 00:55:43 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:51.120 00:55:43 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.120 00:55:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.120 00:55:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:51.120 00:55:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.120 00:55:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:21:51.120 Found net devices under 0000:27:00.0: cvl_0_0 00:21:51.120 00:55:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.120 00:55:43 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:51.120 00:55:43 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:51.120 00:55:43 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:21:51.120 00:55:43 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:51.120 00:55:43 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:21:51.120 Found net devices under 0000:27:00.1: cvl_0_1 00:21:51.120 00:55:43 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:21:51.120 00:55:43 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:21:51.120 00:55:43 -- nvmf/common.sh@403 -- # is_hw=yes 00:21:51.120 00:55:43 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:21:51.120 00:55:43 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:51.120 00:55:43 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:51.120 00:55:43 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:51.120 00:55:43 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:51.120 00:55:43 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:51.120 00:55:43 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:51.120 00:55:43 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:51.120 00:55:43 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:51.120 00:55:43 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:51.120 00:55:43 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:51.120 00:55:43 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:51.120 00:55:43 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:51.120 00:55:43 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:51.120 00:55:43 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:51.120 00:55:43 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:51.120 00:55:43 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:51.120 00:55:43 -- nvmf/common.sh@260 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:51.120 00:55:43 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:51.120 00:55:43 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:51.120 00:55:43 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:51.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:51.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.574 ms 00:21:51.120 00:21:51.120 --- 10.0.0.2 ping statistics --- 00:21:51.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.120 rtt min/avg/max/mdev = 0.574/0.574/0.574/0.000 ms 00:21:51.120 00:55:43 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:51.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:51.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:21:51.120 00:21:51.120 --- 10.0.0.1 ping statistics --- 00:21:51.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:51.120 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:21:51.120 00:55:43 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:51.120 00:55:43 -- nvmf/common.sh@411 -- # return 0 00:21:51.120 00:55:43 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:51.120 00:55:43 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:51.120 00:55:43 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:51.120 00:55:43 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:51.120 00:55:43 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:51.120 00:55:43 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:51.379 00:55:43 -- host/fio.sh@14 -- # [[ y != y ]] 00:21:51.379 00:55:43 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:21:51.379 00:55:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:51.379 00:55:43 -- common/autotest_common.sh@10 -- # set +x 00:21:51.379 00:55:43 -- host/fio.sh@22 -- # nvmfpid=2836730 00:21:51.379 00:55:43 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:51.379 00:55:43 -- host/fio.sh@26 -- # waitforlisten 2836730 00:21:51.379 00:55:43 -- common/autotest_common.sh@817 -- # '[' -z 2836730 ']' 00:21:51.379 00:55:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:51.379 00:55:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:51.379 00:55:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:51.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:51.379 00:55:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:51.379 00:55:43 -- common/autotest_common.sh@10 -- # set +x 00:21:51.379 00:55:43 -- host/fio.sh@21 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:51.379 [2024-04-27 00:55:43.912584] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
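Stepping back: before nvmf_tgt was launched inside the namespace just above, nvmf_tcp_init wired the host's two ice-driver ports into a loopback topology, moving the target port into its own network namespace so that initiator traffic at 10.0.0.1 genuinely crosses the link to the target at 10.0.0.2. Condensed, with every command verbatim from the trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target sanity check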
00:21:51.379 [2024-04-27 00:55:43.912689] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:51.379 EAL: No free 2048 kB hugepages reported on node 1 00:21:51.379 [2024-04-27 00:55:44.033804] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:51.637 [2024-04-27 00:55:44.130751] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:51.637 [2024-04-27 00:55:44.130789] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:51.637 [2024-04-27 00:55:44.130799] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:51.637 [2024-04-27 00:55:44.130808] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:51.637 [2024-04-27 00:55:44.130815] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:51.637 [2024-04-27 00:55:44.130891] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.637 [2024-04-27 00:55:44.130986] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:51.637 [2024-04-27 00:55:44.131086] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.637 [2024-04-27 00:55:44.131096] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:52.206 00:55:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:52.206 00:55:44 -- common/autotest_common.sh@850 -- # return 0 00:21:52.206 00:55:44 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:52.206 00:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.206 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:21:52.206 [2024-04-27 00:55:44.610864] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:52.206 00:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.206 00:55:44 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:21:52.206 00:55:44 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:52.206 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:21:52.206 00:55:44 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:52.206 00:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.206 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:21:52.206 Malloc1 00:21:52.206 00:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.206 00:55:44 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:52.206 00:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.206 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:21:52.206 00:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.206 00:55:44 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:52.206 00:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.206 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:21:52.206 00:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.206 00:55:44 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:52.206 00:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.206 00:55:44 -- common/autotest_common.sh@10 -- # set +x 
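The fio stage that follows drives I/O through the SPDK NVMe fio plugin rather than the kernel initiator: fio is launched with the plugin as its external ioengine and a filename that encodes the transport ID instead of a device path. A sketch of the invocation pattern used below (paths shortened; on ASan builds the sanitizer runtime located via ldd must be preloaded ahead of the plugin, which is what the LD_PRELOAD dance in the trace is for):

    LD_PRELOAD='/usr/lib64/libasan.so.8 ./build/fio/spdk_nvme' \
        /usr/src/fio/fio ./app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096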
00:21:52.206 [2024-04-27 00:55:44.711066] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:52.206 00:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.206 00:55:44 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:52.206 00:55:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:52.206 00:55:44 -- common/autotest_common.sh@10 -- # set +x 00:21:52.206 00:55:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:52.206 00:55:44 -- host/fio.sh@36 -- # PLUGIN_DIR=/var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme 00:21:52.206 00:55:44 -- host/fio.sh@39 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:52.206 00:55:44 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:52.206 00:55:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:52.206 00:55:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:52.206 00:55:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:52.206 00:55:44 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:21:52.206 00:55:44 -- common/autotest_common.sh@1327 -- # shift 00:21:52.206 00:55:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:52.206 00:55:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:21:52.206 00:55:44 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:21:52.206 00:55:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:52.206 00:55:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:52.206 00:55:44 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:52.206 00:55:44 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:52.206 00:55:44 -- common/autotest_common.sh@1333 -- # break 00:21:52.206 00:55:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:52.206 00:55:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:21:52.467 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:21:52.467 fio-3.35 00:21:52.467 Starting 1 thread 00:21:52.729 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.255 00:21:55.255 test: (groupid=0, jobs=1): err= 0: pid=2837309: Sat Apr 27 00:55:47 2024 00:21:55.255 read: IOPS=12.0k, BW=46.9MiB/s (49.2MB/s)(94.1MiB/2005msec) 00:21:55.255 slat (nsec): min=1563, max=156529, avg=2293.14, stdev=1502.74 00:21:55.255 clat (usec): min=2172, max=10117, avg=5865.95, stdev=448.09 00:21:55.255 lat (usec): min=2198, max=10119, avg=5868.24, stdev=447.94 00:21:55.255 clat percentiles (usec): 00:21:55.255 | 1.00th=[ 4883], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:21:55.255 | 30.00th=[ 5669], 40.00th=[ 5735], 50.00th=[ 5866], 
60.00th=[ 5932], 00:21:55.255 | 70.00th=[ 6063], 80.00th=[ 6194], 90.00th=[ 6390], 95.00th=[ 6521], 00:21:55.255 | 99.00th=[ 7111], 99.50th=[ 7439], 99.90th=[ 8586], 99.95th=[ 9372], 00:21:55.255 | 99.99th=[10159] 00:21:55.255 bw ( KiB/s): min=46776, max=48808, per=99.95%, avg=48014.00, stdev=892.31, samples=4 00:21:55.255 iops : min=11694, max=12202, avg=12003.50, stdev=223.08, samples=4 00:21:55.255 write: IOPS=12.0k, BW=46.7MiB/s (49.0MB/s)(93.7MiB/2005msec); 0 zone resets 00:21:55.255 slat (nsec): min=1616, max=139025, avg=2403.05, stdev=1172.33 00:21:55.255 clat (usec): min=1609, max=8948, avg=4754.57, stdev=371.02 00:21:55.255 lat (usec): min=1624, max=8950, avg=4756.97, stdev=370.98 00:21:55.255 clat percentiles (usec): 00:21:55.255 | 1.00th=[ 3982], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:21:55.255 | 30.00th=[ 4555], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4817], 00:21:55.255 | 70.00th=[ 4883], 80.00th=[ 5014], 90.00th=[ 5145], 95.00th=[ 5276], 00:21:55.255 | 99.00th=[ 5669], 99.50th=[ 5932], 99.90th=[ 7701], 99.95th=[ 8291], 00:21:55.255 | 99.99th=[ 8979] 00:21:55.255 bw ( KiB/s): min=47360, max=48448, per=100.00%, avg=47846.00, stdev=557.30, samples=4 00:21:55.255 iops : min=11840, max=12112, avg=11961.50, stdev=139.33, samples=4 00:21:55.255 lat (msec) : 2=0.02%, 4=0.65%, 10=99.32%, 20=0.01% 00:21:55.255 cpu : usr=84.23%, sys=15.37%, ctx=4, majf=0, minf=1528 00:21:55.255 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:21:55.255 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:55.255 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:55.255 issued rwts: total=24079,23978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:55.255 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:55.255 00:21:55.255 Run status group 0 (all jobs): 00:21:55.255 READ: bw=46.9MiB/s (49.2MB/s), 46.9MiB/s-46.9MiB/s (49.2MB/s-49.2MB/s), io=94.1MiB (98.6MB), run=2005-2005msec 00:21:55.255 WRITE: bw=46.7MiB/s (49.0MB/s), 46.7MiB/s-46.7MiB/s (49.0MB/s-49.0MB/s), io=93.7MiB (98.2MB), run=2005-2005msec 00:21:55.255 ----------------------------------------------------- 00:21:55.255 Suppressions used: 00:21:55.255 count bytes template 00:21:55.255 1 57 /usr/src/fio/parse.c 00:21:55.255 1 8 libtcmalloc_minimal.so 00:21:55.255 ----------------------------------------------------- 00:21:55.255 00:21:55.255 00:55:47 -- host/fio.sh@43 -- # fio_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:55.255 00:55:47 -- common/autotest_common.sh@1346 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:55.255 00:55:47 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:21:55.255 00:55:47 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:55.255 00:55:47 -- common/autotest_common.sh@1325 -- # local sanitizers 00:21:55.255 00:55:47 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:21:55.255 00:55:47 -- common/autotest_common.sh@1327 -- # shift 00:21:55.255 00:55:47 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:21:55.255 00:55:47 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 
00:21:55.255 00:55:47 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme 00:21:55.255 00:55:47 -- common/autotest_common.sh@1331 -- # grep libasan 00:21:55.255 00:55:47 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:21:55.255 00:55:47 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:55.255 00:55:47 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:55.255 00:55:47 -- common/autotest_common.sh@1333 -- # break 00:21:55.255 00:55:47 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_nvme' 00:21:55.255 00:55:47 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /var/jenkins/workspace/dsa-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:21:55.834 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:21:55.834 fio-3.35 00:21:55.834 Starting 1 thread 00:21:55.834 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.369 [2024-04-27 00:55:50.780461] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000004080 is same with the state(5) to be set 00:21:58.369 00:21:58.369 test: (groupid=0, jobs=1): err= 0: pid=2838084: Sat Apr 27 00:55:50 2024 00:21:58.369 read: IOPS=8519, BW=133MiB/s (140MB/s)(267MiB/2004msec) 00:21:58.369 slat (usec): min=2, max=147, avg= 3.93, stdev= 1.81 00:21:58.369 clat (usec): min=2132, max=19137, avg=8941.69, stdev=3048.59 00:21:58.369 lat (usec): min=2135, max=19142, avg=8945.62, stdev=3049.38 00:21:58.369 clat percentiles (usec): 00:21:58.369 | 1.00th=[ 3851], 5.00th=[ 4621], 10.00th=[ 5211], 20.00th=[ 5997], 00:21:58.369 | 30.00th=[ 6980], 40.00th=[ 7767], 50.00th=[ 8586], 60.00th=[ 9372], 00:21:58.369 | 70.00th=[10552], 80.00th=[11863], 90.00th=[13304], 95.00th=[14615], 00:21:58.369 | 99.00th=[16057], 99.50th=[16712], 99.90th=[18220], 99.95th=[18220], 00:21:58.369 | 99.99th=[19006] 00:21:58.369 bw ( KiB/s): min=50528, max=90144, per=51.49%, avg=70192.00, stdev=18910.29, samples=4 00:21:58.369 iops : min= 3158, max= 5634, avg=4387.00, stdev=1181.89, samples=4 00:21:58.369 write: IOPS=5038, BW=78.7MiB/s (82.5MB/s)(143MiB/1821msec); 0 zone resets 00:21:58.369 slat (usec): min=28, max=599, avg=41.23, stdev=12.91 00:21:58.369 clat (usec): min=3135, max=20956, avg=10486.22, stdev=2681.03 00:21:58.369 lat (usec): min=3170, max=21009, avg=10527.45, stdev=2689.86 00:21:58.369 clat percentiles (usec): 00:21:58.369 | 1.00th=[ 5997], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7767], 00:21:58.370 | 30.00th=[ 8586], 40.00th=[ 9503], 50.00th=[10421], 60.00th=[11207], 00:21:58.370 | 70.00th=[12125], 80.00th=[13042], 90.00th=[14091], 95.00th=[15008], 00:21:58.370 | 99.00th=[16581], 99.50th=[17171], 99.90th=[17957], 99.95th=[18482], 00:21:58.370 | 99.99th=[20841] 00:21:58.370 bw ( KiB/s): min=52672, max=92608, per=90.28%, avg=72776.00, stdev=19275.48, samples=4 00:21:58.370 iops : min= 3292, max= 5788, avg=4548.50, stdev=1204.72, samples=4 00:21:58.370 lat (msec) : 4=0.99%, 10=57.29%, 20=41.72%, 50=0.01% 00:21:58.370 cpu : usr=84.82%, sys=14.73%, ctx=7, majf=0, minf=2236 00:21:58.370 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:21:58.370 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:58.370 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:21:58.370 issued rwts: total=17074,9175,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:58.370 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:58.370 00:21:58.370 Run status group 0 (all jobs): 00:21:58.370 READ: bw=133MiB/s (140MB/s), 133MiB/s-133MiB/s (140MB/s-140MB/s), io=267MiB (280MB), run=2004-2004msec 00:21:58.370 WRITE: bw=78.7MiB/s (82.5MB/s), 78.7MiB/s-78.7MiB/s (82.5MB/s-82.5MB/s), io=143MiB (150MB), run=1821-1821msec 00:21:58.370 ----------------------------------------------------- 00:21:58.370 Suppressions used: 00:21:58.370 count bytes template 00:21:58.370 1 57 /usr/src/fio/parse.c 00:21:58.370 924 88704 /usr/src/fio/iolog.c 00:21:58.370 1 8 libtcmalloc_minimal.so 00:21:58.370 ----------------------------------------------------- 00:21:58.370 00:21:58.370 00:55:51 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:58.370 00:55:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:58.370 00:55:51 -- common/autotest_common.sh@10 -- # set +x 00:21:58.370 00:55:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:58.370 00:55:51 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:21:58.370 00:55:51 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:21:58.370 00:55:51 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:21:58.370 00:55:51 -- host/fio.sh@84 -- # nvmftestfini 00:21:58.370 00:55:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:58.370 00:55:51 -- nvmf/common.sh@117 -- # sync 00:21:58.370 00:55:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:58.370 00:55:51 -- nvmf/common.sh@120 -- # set +e 00:21:58.370 00:55:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:58.370 00:55:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:58.370 rmmod nvme_tcp 00:21:58.370 rmmod nvme_fabrics 00:21:58.628 rmmod nvme_keyring 00:21:58.628 00:55:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:58.628 00:55:51 -- nvmf/common.sh@124 -- # set -e 00:21:58.628 00:55:51 -- nvmf/common.sh@125 -- # return 0 00:21:58.628 00:55:51 -- nvmf/common.sh@478 -- # '[' -n 2836730 ']' 00:21:58.628 00:55:51 -- nvmf/common.sh@479 -- # killprocess 2836730 00:21:58.629 00:55:51 -- common/autotest_common.sh@936 -- # '[' -z 2836730 ']' 00:21:58.629 00:55:51 -- common/autotest_common.sh@940 -- # kill -0 2836730 00:21:58.629 00:55:51 -- common/autotest_common.sh@941 -- # uname 00:21:58.629 00:55:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:58.629 00:55:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2836730 00:21:58.629 00:55:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:58.629 00:55:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:58.629 00:55:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2836730' 00:21:58.629 killing process with pid 2836730 00:21:58.629 00:55:51 -- common/autotest_common.sh@955 -- # kill 2836730 00:21:58.629 00:55:51 -- common/autotest_common.sh@960 -- # wait 2836730 00:21:59.193 00:55:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:59.193 00:55:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:59.193 00:55:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:59.193 00:55:51 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.193 00:55:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.193 00:55:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.193 00:55:51 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:21:59.193 00:55:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.097 00:55:53 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:01.097 00:22:01.097 real 0m15.323s 00:22:01.097 user 1m7.032s 00:22:01.097 sys 0m5.702s 00:22:01.097 00:55:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:01.097 00:55:53 -- common/autotest_common.sh@10 -- # set +x 00:22:01.097 ************************************ 00:22:01.097 END TEST nvmf_fio_host 00:22:01.097 ************************************ 00:22:01.097 00:55:53 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:01.097 00:55:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:01.097 00:55:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:01.097 00:55:53 -- common/autotest_common.sh@10 -- # set +x 00:22:01.357 ************************************ 00:22:01.357 START TEST nvmf_failover 00:22:01.357 ************************************ 00:22:01.357 00:55:53 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:01.357 * Looking for test storage... 00:22:01.357 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:01.357 00:55:53 -- host/failover.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:01.357 00:55:53 -- nvmf/common.sh@7 -- # uname -s 00:22:01.357 00:55:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:01.357 00:55:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:01.357 00:55:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:01.357 00:55:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:01.357 00:55:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:01.357 00:55:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:01.357 00:55:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:01.357 00:55:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:01.357 00:55:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:01.357 00:55:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:01.357 00:55:53 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:22:01.357 00:55:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:22:01.357 00:55:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:01.357 00:55:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:01.357 00:55:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:01.357 00:55:53 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:01.357 00:55:53 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:01.357 00:55:53 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:01.357 00:55:53 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:01.357 00:55:53 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:01.357 00:55:53 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.357 00:55:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.357 00:55:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.357 00:55:53 -- paths/export.sh@5 -- # export PATH 00:22:01.357 00:55:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:01.357 00:55:53 -- nvmf/common.sh@47 -- # : 0 00:22:01.357 00:55:53 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:01.357 00:55:53 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:01.357 00:55:53 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:01.357 00:55:53 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:01.357 00:55:53 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:01.357 00:55:53 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:01.357 00:55:53 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:01.357 00:55:53 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:01.357 00:55:53 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:01.357 00:55:53 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:01.357 00:55:53 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py 00:22:01.357 00:55:53 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:01.357 00:55:53 -- host/failover.sh@18 -- # nvmftestinit 00:22:01.357 00:55:53 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:01.357 00:55:53 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:01.357 00:55:53 -- nvmf/common.sh@437 -- # prepare_net_devs 
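The trace that follows re-runs NIC discovery for the failover test, exactly as it did for nvmf_fio_host. A hypothetical condensed equivalent of the gather_supported_nvmf_pci_devs walk, for the Intel 0x159b devices this host matches (the interface names under each device's net/ directory are what become cvl_0_0 and cvl_0_1 here):

    # hypothetical sketch, not the test's own code
    for dev in /sys/bus/pci/devices/*; do
        if [ "$(cat "$dev/vendor" 2>/dev/null)" = "0x8086" ] &&
           [ "$(cat "$dev/device" 2>/dev/null)" = "0x159b" ]; then
            ls "$dev/net"
        fi
    done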
00:22:01.357 00:55:53 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:01.357 00:55:53 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:01.357 00:55:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:01.357 00:55:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:01.357 00:55:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:01.357 00:55:53 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:22:01.357 00:55:53 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:01.357 00:55:53 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:01.357 00:55:53 -- common/autotest_common.sh@10 -- # set +x 00:22:06.629 00:55:59 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:06.629 00:55:59 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.629 00:55:59 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.629 00:55:59 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.629 00:55:59 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.629 00:55:59 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.629 00:55:59 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.629 00:55:59 -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.629 00:55:59 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.629 00:55:59 -- nvmf/common.sh@296 -- # e810=() 00:22:06.630 00:55:59 -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.630 00:55:59 -- nvmf/common.sh@297 -- # x722=() 00:22:06.630 00:55:59 -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.630 00:55:59 -- nvmf/common.sh@298 -- # mlx=() 00:22:06.630 00:55:59 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.630 00:55:59 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.630 00:55:59 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.630 00:55:59 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.630 00:55:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.630 00:55:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:06.630 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:06.630 00:55:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@351 -- 
# [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.630 00:55:59 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:06.630 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:06.630 00:55:59 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.630 00:55:59 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.630 00:55:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.630 00:55:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:06.630 00:55:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.630 00:55:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:06.630 Found net devices under 0000:27:00.0: cvl_0_0 00:22:06.630 00:55:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.630 00:55:59 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.630 00:55:59 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.630 00:55:59 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:06.630 00:55:59 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.630 00:55:59 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:06.630 Found net devices under 0000:27:00.1: cvl_0_1 00:22:06.630 00:55:59 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.630 00:55:59 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:06.630 00:55:59 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:06.630 00:55:59 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:06.630 00:55:59 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:06.630 00:55:59 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:06.630 00:55:59 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:06.630 00:55:59 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.630 00:55:59 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:06.630 00:55:59 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:06.630 00:55:59 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:06.630 00:55:59 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:06.630 00:55:59 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:06.630 00:55:59 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:06.630 00:55:59 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:06.630 00:55:59 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:06.630 00:55:59 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:06.630 00:55:59 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:06.630 00:55:59 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:06.630 00:55:59 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:06.630 00:55:59 -- nvmf/common.sh@258 
-- # ip link set cvl_0_1 up 00:22:06.630 00:55:59 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:06.630 00:55:59 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:06.890 00:55:59 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:06.890 00:55:59 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:06.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:06.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:22:06.890 00:22:06.890 --- 10.0.0.2 ping statistics --- 00:22:06.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.890 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:22:06.890 00:55:59 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:06.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:06.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:22:06.890 00:22:06.890 --- 10.0.0.1 ping statistics --- 00:22:06.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:06.890 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:22:06.890 00:55:59 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:06.890 00:55:59 -- nvmf/common.sh@411 -- # return 0 00:22:06.890 00:55:59 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:06.890 00:55:59 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:06.890 00:55:59 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:06.890 00:55:59 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:06.890 00:55:59 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:06.890 00:55:59 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:06.890 00:55:59 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:06.890 00:55:59 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:06.890 00:55:59 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:06.890 00:55:59 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:06.890 00:55:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.890 00:55:59 -- nvmf/common.sh@470 -- # nvmfpid=2842604 00:22:06.890 00:55:59 -- nvmf/common.sh@471 -- # waitforlisten 2842604 00:22:06.890 00:55:59 -- common/autotest_common.sh@817 -- # '[' -z 2842604 ']' 00:22:06.890 00:55:59 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:06.890 00:55:59 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:06.890 00:55:59 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:06.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:06.890 00:55:59 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:06.890 00:55:59 -- common/autotest_common.sh@10 -- # set +x 00:22:06.890 00:55:59 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:06.890 [2024-04-27 00:55:59.452368] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
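Everything traced up to this point is plumbing: the two ports of the Intel E810 NIC (device 0x8086:0x159b, bound to the ice driver) are found by globbing /sys/bus/pci/devices/$pci/net/, the target-facing port is moved into a private network namespace, and nvmf_tgt is started inside that namespace while waitforlisten polls its RPC socket (up to max_retries=100) before the test continues. A condensed sketch of the same setup, assuming this run's interface names, addresses, and workspace layout; the liveness probe via rpc_get_methods stands in for the fuller retry logic in autotest_common.sh:

# Discover the net devices behind each NIC port (prints cvl_0_0, cvl_0_1 here).
for pci in 0000:27:00.0 0000:27:00.1; do ls "/sys/bus/pci/devices/$pci/net/"; done
# Wire up the namespace-based NVMe/TCP topology: target side in its own netns,
# initiator side in the root namespace, talking over 10.0.0.0/24.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                  # target-facing port
ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator address
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
# Launch the target inside the namespace and block until its RPC socket answers.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done

The core mask -m 0xE pins the target to cores 1-3, which matches the three reactor threads reported just below.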
00:22:06.890 [2024-04-27 00:55:59.452477] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:06.890 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.151 [2024-04-27 00:55:59.599561] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:07.151 [2024-04-27 00:55:59.741371] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.151 [2024-04-27 00:55:59.741426] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.151 [2024-04-27 00:55:59.741442] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.151 [2024-04-27 00:55:59.741458] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.151 [2024-04-27 00:55:59.741470] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.151 [2024-04-27 00:55:59.741640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.151 [2024-04-27 00:55:59.741751] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.151 [2024-04-27 00:55:59.741756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.722 00:56:00 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:07.722 00:56:00 -- common/autotest_common.sh@850 -- # return 0 00:22:07.722 00:56:00 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:07.722 00:56:00 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:07.722 00:56:00 -- common/autotest_common.sh@10 -- # set +x 00:22:07.722 00:56:00 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:07.722 00:56:00 -- host/failover.sh@22 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:07.722 [2024-04-27 00:56:00.337992] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:07.722 00:56:00 -- host/failover.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:08.048 Malloc0 00:22:08.048 00:56:00 -- host/failover.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:08.359 00:56:00 -- host/failover.sh@25 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:08.359 00:56:00 -- host/failover.sh@26 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.359 [2024-04-27 00:56:01.030255] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.619 00:56:01 -- host/failover.sh@27 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:08.619 [2024-04-27 00:56:01.186357] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:08.619 00:56:01 -- host/failover.sh@28 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:08.880 [2024-04-27 
00:56:01.338531] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:08.880 00:56:01 -- host/failover.sh@31 -- # bdevperf_pid=2842993 00:22:08.880 00:56:01 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:08.880 00:56:01 -- host/failover.sh@34 -- # waitforlisten 2842993 /var/tmp/bdevperf.sock 00:22:08.880 00:56:01 -- host/failover.sh@30 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:08.880 00:56:01 -- common/autotest_common.sh@817 -- # '[' -z 2842993 ']' 00:22:08.880 00:56:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:08.880 00:56:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:08.880 00:56:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:08.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:08.880 00:56:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:08.880 00:56:01 -- common/autotest_common.sh@10 -- # set +x 00:22:09.818 00:56:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:09.818 00:56:02 -- common/autotest_common.sh@850 -- # return 0 00:22:09.818 00:56:02 -- host/failover.sh@35 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:09.818 NVMe0n1 00:22:09.818 00:56:02 -- host/failover.sh@36 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:10.076 00:22:10.076 00:56:02 -- host/failover.sh@39 -- # run_test_pid=2843295 00:22:10.076 00:56:02 -- host/failover.sh@41 -- # sleep 1 00:22:10.076 00:56:02 -- host/failover.sh@38 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
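Pulling the RPC traffic out of the trace: the target now exports one 64 MB Malloc namespace behind nqn.2016-06.io.spdk:cnode1 with listeners on ports 4420, 4421, and 4422, and bdevperf holds two paths to it under the single controller name NVMe0. Attaching the same -b name a second time registers an alternate transport ID rather than a new bdev, which is what gives NVMe0n1 something to fail over to; note the second attach above prints nothing while the first prints the bdev name. Condensed, the sequence is (script paths relative to the spdk tree in this workspace):

# Target-side provisioning over the default /var/tmp/spdk.sock:
rpc=scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB in-capsule data
$rpc bdev_malloc_create 64 512 -b Malloc0        # 64 MB ramdisk, 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
done
# Initiator side, against bdevperf's private RPC socket:
brpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1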
00:22:11.455 00:56:03 -- host/failover.sh@43 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:11.455 [2024-04-27 00:56:03.886976] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000002880 is same with the state(5) to be set 00:22:11.455 [... same tcp.c:1587 message repeated 31 more times for tqpair=0x618000002880, through 00:56:03.887257, as the qpairs connected through the removed 4420 listener are torn down ...]
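The burst of tcp.c:1587 messages above is the target tearing down every qpair that was connected through the removed listener; it is expected noise during failover, not a failure. From the initiator side, the resulting path switch could be watched against bdevperf's RPC socket; a hypothetical probe, not part of failover.sh:

# Inspect NVMe0 while the 4420 listener is down; the controller entry shows
# which transport ID bdev_nvme is currently using, so the switch to the 4421
# path is visible here while I/O keeps running.
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n NVMe0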
00:22:11.456 00:56:03 -- host/failover.sh@45 -- # sleep 3 00:22:14.747 00:56:06 -- host/failover.sh@47 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:14.747 00:22:14.747 00:56:07 -- host/failover.sh@48 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:14.747 [2024-04-27 00:56:07.299514] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003080 is same with the state(5) to be set 00:22:14.747 [... same message repeated 6 more times for tqpair=0x618000003080, through 00:56:07.299621 ...] 00:22:14.747 00:56:07 -- host/failover.sh@50 -- # sleep 3 00:22:18.040 00:56:10 -- host/failover.sh@53 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:18.040 [2024-04-27 00:56:10.449564] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:18.040 00:56:10 -- host/failover.sh@55 -- # sleep 1 00:22:19.002 00:56:11 -- host/failover.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:19.002 [2024-04-27 00:56:11.624089] tcp.c:1587:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x618000003c80 is same with the state(5) to be set 00:22:19.002 [... same message repeated 28 more times for tqpair=0x618000003c80, through 00:56:11.624357 ...]
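That is the whole failover dance: while bdevperf keeps 128 queued verify I/Os (-q 128 -w verify -t 15) running against NVMe0n1, the script alternately tears down and restores listeners, and each tcp.c:1587 burst above marks one path being dropped. Reduced to its skeleton, with the NQN and ports from this run (run_test_pid is the backgrounded perform_tests invocation captured earlier):

NQN=nqn.2016-06.io.spdk:cnode1
rpc=scripts/rpc.py
brpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"
$rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4420   # drop path 1; I/O fails over to 4421
sleep 3
$brpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n $NQN   # register a third path
$rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4421   # drop path 2; fail over to 4422
sleep 3
$rpc nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4420      # bring path 1 back
sleep 1
$rpc nvmf_subsystem_remove_listener $NQN -t tcp -a 10.0.0.2 -s 4422   # drop path 3; fail back to 4420
wait $run_test_pid    # perform_tests exit status; 0 below means all I/O verified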
00:22:19.002 00:56:11 -- host/failover.sh@59 -- # wait 2843295 00:22:25.593 0 00:22:25.593 00:56:17 -- host/failover.sh@61 -- # killprocess 2842993 00:22:25.593 00:56:17 -- common/autotest_common.sh@936 -- # '[' -z 2842993 ']' 00:22:25.593 00:56:17 -- common/autotest_common.sh@940 -- # kill -0 2842993 00:22:25.593 00:56:17 -- common/autotest_common.sh@941 -- # uname 00:22:25.593 00:56:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:25.593 00:56:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2842993 00:22:25.593 00:56:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:25.593 00:56:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:25.593 00:56:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2842993' 00:22:25.593 killing process with pid 2842993 00:22:25.593 00:56:17 -- common/autotest_common.sh@955 -- # kill 2842993 00:22:25.593 00:56:17 -- common/autotest_common.sh@960 -- # wait 2842993 00:22:25.593 00:56:18 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:25.593 [2024-04-27 00:56:01.444037] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:22:25.593 [2024-04-27 00:56:01.444273] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2842993 ] 00:22:25.593 EAL: No free 2048 kB hugepages reported on node 1 00:22:25.593 [2024-04-27 00:56:01.573309] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:25.593 [2024-04-27 00:56:01.664505] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:25.593 Running I/O for 15 seconds... 
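The dump that follows is bdevperf's own log, try.txt, replayed by the trap above. Each listener removal deletes the active qpair, so the NVMe driver prints every in-flight command it has to complete manually with ABORTED - SQ DELETION (00/08); bdev_nvme then reissues those I/Os on a surviving path, which is why the run still finishes with status 0. When triaging a dump like this it is quicker to aggregate the completion statuses than to scan line by line; a hypothetical one-liner, not part of the harness:

# Tally aborted completions in the bdevperf log; any status other than
# "ABORTED - SQ DELETION (00/08)" would point at a real I/O failure.
grep -c 'ABORTED - SQ DELETION' try.txt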
00:22:25.593 [2024-04-27 00:56:03.888634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:98424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.593 [2024-04-27 00:56:03.888683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.593 [... the same print_command/print_completion pairing repeats for every outstanding command on the deleted qpair: READs covering lba:98432 through lba:98688 and WRITEs covering lba:98696 through lba:99088, each completed with ABORTED - SQ DELETION (00/08) qid:1 sqhd:0000 p:0 m:0 dnr:0; nvme_qpair_abort_queued_reqs then flushes the still-queued WRITEs from lba:99096 through lba:99216 (PRP1 0x0 PRP2 0x0), completing each manually with the same status, and continues below ...] 00:22:25.596 [2024-04-27 00:56:03.890673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890679] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:99224 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890709] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99232 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99240 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890765] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99248 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890787] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890793] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99256 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890815] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99264 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99272 len:8 PRP1 0x0 PRP2 0x0 
00:22:25.596 [2024-04-27 00:56:03.890862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890870] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890876] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99280 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890899] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890905] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99288 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890928] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890934] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99296 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890956] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890962] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99304 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.890977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.890984] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.890990] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.890997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99312 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.891004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.891012] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.891018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.891025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99320 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.891033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.891040] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.891046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.891053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99328 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.891060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.596 [2024-04-27 00:56:03.891068] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.596 [2024-04-27 00:56:03.891074] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.596 [2024-04-27 00:56:03.891080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99336 len:8 PRP1 0x0 PRP2 0x0 00:22:25.596 [2024-04-27 00:56:03.891088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891101] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891107] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99344 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99352 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99360 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891192] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99368 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891214] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99376 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891247] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99384 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891274] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891280] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99392 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891302] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891308] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99400 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891329] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891335] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99408 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891359] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891365] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99416 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:22:25.597 [2024-04-27 00:56:03.891389] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891395] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99424 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891416] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891423] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99432 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891445] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:25.597 [2024-04-27 00:56:03.891451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:25.597 [2024-04-27 00:56:03.891457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99440 len:8 PRP1 0x0 PRP2 0x0 00:22:25.597 [2024-04-27 00:56:03.891466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891580] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000007240 was disconnected and freed. reset controller. 
00:22:25.597 [2024-04-27 00:56:03.891594] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:25.597 [2024-04-27 00:56:03.891631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.597 [2024-04-27 00:56:03.891641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.597 [2024-04-27 00:56:03.891660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.597 [2024-04-27 00:56:03.891676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.597 [2024-04-27 00:56:03.891692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:03.891701] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:25.597 [2024-04-27 00:56:03.891755] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:22:25.597 [2024-04-27 00:56:03.894286] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.597 [2024-04-27 00:56:03.926498] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
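The repeated ABORTED - SQ DELETION notices above are the expected outcome of this failover pass, not a failure: the target deletes the submission queue on the 10.0.0.2:4420 path, every queued WRITE/READ is aborted and completed manually, and bdev_nvme fails over to 10.0.0.2:4421 and resets the controller successfully. A minimal sketch of how such a scenario is typically driven through SPDK's rpc.py follows; the NQN, addresses, and ports are taken from the log, while the script itself (including the -x failover multipath mode) is an assumption, not the exact test that produced this output.

# Assumed sketch: expose nqn.2016-06.io.spdk:cnode1 on two portals, attach
# both paths in failover mode, then drop the active listener to force the
# SQ deletion and failover recorded above.
rpc_py=scripts/rpc.py
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc_py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
# Removing the active listener aborts queued I/O with SQ DELETION (00/08);
# bdev_nvme then starts failover from 10.0.0.2:4420 to 10.0.0.2:4421.
$rpc_py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420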
00:22:25.597 [2024-04-27 00:56:07.299734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.597 [2024-04-27 00:56:07.299792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:07.299826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:29928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.597 [2024-04-27 00:56:07.299835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:07.299846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:29936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.597 [2024-04-27 00:56:07.299854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.597 [2024-04-27 00:56:07.299863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:29944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.299871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.299880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:29952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.299887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.299896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:29960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.299904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.299913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.299921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.299931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:29976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.299939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.299948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:29984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.299956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.299965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:29992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.299973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.299982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:30000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.299990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.299999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:30008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.300007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:49 nsid:1 lba:30016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.300173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:29208 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:30024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.598 [2024-04-27 00:56:07.300396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:25.598 [2024-04-27 00:56:07.300537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.598 [2024-04-27 00:56:07.300547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.598 [2024-04-27 00:56:07.300555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300710] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:30032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:25.599 [2024-04-27 00:56:07.300817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300889] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.300990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.300997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.301007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.301014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.301024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.301032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.301041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.301049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.599 [2024-04-27 00:56:07.301058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:25.599 [2024-04-27 00:56:07.301066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.599 [2024-04-27 00:56:07.301076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:25.599 [2024-04-27 00:56:07.301083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every remaining in-flight I/O on qid:1 (READ lba 29528-29904, WRITE lba 30040-30088), each aborted with ABORTED - SQ DELETION (00/08) ...]
00:22:25.601 [2024-04-27 00:56:07.302075] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000008040 is same with the state(5) to be set
00:22:25.601 [2024-04-27 00:56:07.302089] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:25.601 [2024-04-27 00:56:07.302098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:25.601 [2024-04-27 00:56:07.302108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29912 len:8 PRP1 0x0 PRP2 0x0
00:22:25.601 [2024-04-27 00:56:07.302117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.601 [2024-04-27 00:56:07.302253] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000008040 was disconnected and freed. reset controller.
00:22:25.601 [2024-04-27 00:56:07.302267] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:22:25.601 [2024-04-27 00:56:07.302299] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.601 [2024-04-27 00:56:07.302309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.601 [2024-04-27 00:56:07.302320] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.601 [2024-04-27 00:56:07.302328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.601 [2024-04-27 00:56:07.302337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.601 [2024-04-27 00:56:07.302345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.601 [2024-04-27 00:56:07.302355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:22:25.601 [2024-04-27 00:56:07.302363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.601 [2024-04-27 00:56:07.302371] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:22:25.601 [2024-04-27 00:56:07.304963] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:22:25.601 [2024-04-27 00:56:07.304999] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor
00:22:25.601 [2024-04-27 00:56:07.375273] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
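For reference: the "(00/08)" in the completion notices above is the NVMe status code type / status code pair, SCT 0x0 (generic command status) with SC 0x08 (Command Aborted due to SQ Deletion), which spdk_nvme_print_completion renders as "ABORTED - SQ DELETION". A minimal standalone C sketch of that decoding (not SPDK code; only the values that appear in this log are mapped):

    /* Standalone sketch: decode the "(SCT/SC)" pair printed in the
     * completion notices above, e.g. "(00/08)". Values follow the
     * NVMe base specification; this is not SPDK code. */
    #include <stdio.h>

    static const char *decode_generic_status(unsigned sct, unsigned sc)
    {
        if (sct != 0x0)                 /* 0x0 = generic command status */
            return "non-generic status code type";
        switch (sc) {
        case 0x00: return "SUCCESS";
        case 0x07: return "ABORTED - BY REQUEST";
        case 0x08: return "ABORTED - SQ DELETION";
        default:   return "other generic status";
        }
    }

    int main(void)
    {
        unsigned sct = 0x00, sc = 0x08;   /* the (00/08) seen in this log */
        printf("(%02x/%02x) -> %s\n", sct, sc, decode_generic_status(sct, sc));
        return 0;
    }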
00:22:25.601 [2024-04-27 00:56:11.624459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:59576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:25.601 [2024-04-27 00:56:11.624512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command/print_completion pair repeats for every remaining in-flight I/O on qid:1 (READ lba 59584-60504, WRITE lba 60520-60592), each aborted with ABORTED - SQ DELETION (00/08) ...]
00:22:25.604 [2024-04-27 00:56:11.626794] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000009240 is same with the state(5) to be set
00:22:25.604 [2024-04-27 00:56:11.626808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:22:25.604 [2024-04-27 00:56:11.626816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:22:25.604 [2024-04-27 00:56:11.626825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:60512 len:8 PRP1 0x0 PRP2 0x0
00:22:25.604 [2024-04-27 00:56:11.626835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:25.604 [2024-04-27 00:56:11.626958] bdev_nvme.c:1601:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x614000009240 was disconnected and freed. reset controller.
00:22:25.604 [2024-04-27 00:56:11.626972] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:25.604 [2024-04-27 00:56:11.627002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.604 [2024-04-27 00:56:11.627012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.604 [2024-04-27 00:56:11.627023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.604 [2024-04-27 00:56:11.627030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.604 [2024-04-27 00:56:11.627040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.604 [2024-04-27 00:56:11.627047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.604 [2024-04-27 00:56:11.627055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:25.604 [2024-04-27 00:56:11.627063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:25.604 [2024-04-27 00:56:11.627071] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:25.604 [2024-04-27 00:56:11.629683] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:25.604 [2024-04-27 00:56:11.629719] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:22:25.604 [2024-04-27 00:56:11.793316] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:25.604 00:22:25.604 Latency(us) 00:22:25.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.604 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:25.604 Verification LBA range: start 0x0 length 0x4000 00:22:25.604 NVMe0n1 : 15.01 11320.76 44.22 937.40 0.00 10422.31 538.95 13176.19 00:22:25.604 =================================================================================================================== 00:22:25.604 Total : 11320.76 44.22 937.40 0.00 10422.31 538.95 13176.19 00:22:25.604 Received shutdown signal, test time was about 15.000000 seconds 00:22:25.604 00:22:25.604 Latency(us) 00:22:25.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:25.604 =================================================================================================================== 00:22:25.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:25.604 00:56:18 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:22:25.604 00:56:18 -- host/failover.sh@65 -- # count=3 00:22:25.604 00:56:18 -- host/failover.sh@67 -- # (( count != 3 )) 00:22:25.604 00:56:18 -- host/failover.sh@73 -- # bdevperf_pid=2846282 00:22:25.605 00:56:18 -- host/failover.sh@75 -- # waitforlisten 2846282 /var/tmp/bdevperf.sock 00:22:25.605 00:56:18 -- common/autotest_common.sh@817 -- # '[' -z 2846282 ']' 00:22:25.605 00:56:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:25.605 00:56:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:25.605 00:56:18 -- host/failover.sh@72 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:22:25.605 00:56:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:25.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
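The MiB/s column in these bdevperf tables follows directly from the IOPS figure and the 4096-byte IO size passed with -o 4096; a quick sanity check of the 15-second run above (a sketch, not part of the test):

    # throughput check: IOPS x 4096-byte IOs, converted to MiB/s
    awk 'BEGIN { printf "%.2f MiB/s\n", 11320.76 * 4096 / (1024 * 1024) }'
    # -> 44.22 MiB/s, matching the table

The 1-second run later in the log works out the same way (11455.33 x 4096 / 2^20 = 44.75). The nonzero Fail/s figure (937.40) presumably reflects I/O aborted while paths were being reset, not media errors; the test's pass criterion is the reset count, not the failure rate.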
00:22:25.605 00:56:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:25.605 00:56:18 -- common/autotest_common.sh@10 -- # set +x 00:22:26.543 00:56:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:26.543 00:56:19 -- common/autotest_common.sh@850 -- # return 0 00:22:26.543 00:56:19 -- host/failover.sh@76 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:26.543 [2024-04-27 00:56:19.175726] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:26.543 00:56:19 -- host/failover.sh@77 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:26.800 [2024-04-27 00:56:19.339787] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:26.800 00:56:19 -- host/failover.sh@78 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.058 NVMe0n1 00:22:27.058 00:56:19 -- host/failover.sh@79 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.623 00:22:27.623 00:56:20 -- host/failover.sh@80 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:27.881 00:22:27.881 00:56:20 -- host/failover.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:27.881 00:56:20 -- host/failover.sh@82 -- # grep -q NVMe0 00:22:27.881 00:56:20 -- host/failover.sh@84 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:28.139 00:56:20 -- host/failover.sh@87 -- # sleep 3 00:22:31.423 00:56:23 -- host/failover.sh@88 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:31.423 00:56:23 -- host/failover.sh@88 -- # grep -q NVMe0 00:22:31.423 00:56:23 -- host/failover.sh@90 -- # run_test_pid=2847292 00:22:31.423 00:56:23 -- host/failover.sh@92 -- # wait 2847292 00:22:31.423 00:56:23 -- host/failover.sh@89 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:32.361 0 00:22:32.361 00:56:24 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:32.361 [2024-04-27 00:56:18.317507] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
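Condensed, the topology this phase builds is one subsystem reachable on three portals, with bdevperf's single NVMe0 controller attached across all of them; detaching the active path (4420 above) is what forces the failover visible in the try.txt dump that follows. A minimal sketch using the values from this run (rpc.py path assumed relative to the workspace; the 4420 listener already exists from the earlier target setup):

    NQN=nqn.2016-06.io.spdk:cnode1 ADDR=10.0.0.2 SOCK=/var/tmp/bdevperf.sock
    RPC=spdk/scripts/rpc.py                        # assumed path to the SPDK RPC helper
    for port in 4421 4422; do                      # add the two extra portals
        $RPC nvmf_subsystem_add_listener $NQN -t tcp -a $ADDR -s $port
    done
    for port in 4420 4421 4422; do                 # same -b name: the extra trids become failover paths
        $RPC -s $SOCK bdev_nvme_attach_controller -b NVMe0 -t tcp -a $ADDR -s $port -f ipv4 -n $NQN
    done
    # drop the active path; bdev_nvme logs "Start failover from ..." and resets onto the next portal
    $RPC -s $SOCK bdev_nvme_detach_controller NVMe0 -t tcp -a $ADDR -s 4420 -f ipv4 -n $NQN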
00:22:32.361 [2024-04-27 00:56:18.317632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846282 ] 00:22:32.361 EAL: No free 2048 kB hugepages reported on node 1 00:22:32.361 [2024-04-27 00:56:18.439449] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:32.361 [2024-04-27 00:56:18.528519] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:32.361 [2024-04-27 00:56:20.652592] bdev_nvme.c:1857:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:32.361 [2024-04-27 00:56:20.652676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.361 [2024-04-27 00:56:20.652695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.361 [2024-04-27 00:56:20.652711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.361 [2024-04-27 00:56:20.652722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.361 [2024-04-27 00:56:20.652732] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.361 [2024-04-27 00:56:20.652742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.361 [2024-04-27 00:56:20.652752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:32.361 [2024-04-27 00:56:20.652762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:32.361 [2024-04-27 00:56:20.652772] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:22:32.361 [2024-04-27 00:56:20.652822] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:32.361 [2024-04-27 00:56:20.652851] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000004a40 (9): Bad file descriptor 00:22:32.361 [2024-04-27 00:56:20.664996] bdev_nvme.c:2054:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:22:32.361 Running I/O for 1 seconds... 
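The dump shows bdevperf riding out a failover from 4420 to 4421 (the earlier excerpt showed 4422 falling back to 4420). Before the 1-second run's numbers below, note that the full hop sequence can be pulled from a log like try.txt in one line (a sketch; the test's own pass check is the companion grep -c 'Resetting controller successful' seen earlier):

    # list each failover hop in order, one line per detached path
    grep -o 'Start failover from [0-9.]*:[0-9]* to [0-9.]*:[0-9]*' try.txt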
00:22:32.361 00:22:32.361 Latency(us) 00:22:32.361 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:32.361 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:22:32.361 Verification LBA range: start 0x0 length 0x4000 00:22:32.361 NVMe0n1 : 1.01 11455.33 44.75 0.00 0.00 11134.01 1440.07 17039.36 00:22:32.361 =================================================================================================================== 00:22:32.361 Total : 11455.33 44.75 0.00 0.00 11134.01 1440.07 17039.36 00:22:32.361 00:56:24 -- host/failover.sh@95 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:32.361 00:56:24 -- host/failover.sh@95 -- # grep -q NVMe0 00:22:32.619 00:56:25 -- host/failover.sh@98 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:32.619 00:56:25 -- host/failover.sh@99 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:32.619 00:56:25 -- host/failover.sh@99 -- # grep -q NVMe0 00:22:32.876 00:56:25 -- host/failover.sh@100 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:32.876 00:56:25 -- host/failover.sh@101 -- # sleep 3 00:22:36.165 00:56:28 -- host/failover.sh@103 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:36.165 00:56:28 -- host/failover.sh@103 -- # grep -q NVMe0 00:22:36.165 00:56:28 -- host/failover.sh@108 -- # killprocess 2846282 00:22:36.165 00:56:28 -- common/autotest_common.sh@936 -- # '[' -z 2846282 ']' 00:22:36.165 00:56:28 -- common/autotest_common.sh@940 -- # kill -0 2846282 00:22:36.165 00:56:28 -- common/autotest_common.sh@941 -- # uname 00:22:36.165 00:56:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.165 00:56:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2846282 00:22:36.165 00:56:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:36.166 00:56:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:36.166 00:56:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2846282' 00:22:36.166 killing process with pid 2846282 00:22:36.166 00:56:28 -- common/autotest_common.sh@955 -- # kill 2846282 00:22:36.166 00:56:28 -- common/autotest_common.sh@960 -- # wait 2846282 00:22:36.424 00:56:29 -- host/failover.sh@110 -- # sync 00:22:36.424 00:56:29 -- host/failover.sh@111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:36.683 00:56:29 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:22:36.683 00:56:29 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:36.683 00:56:29 -- host/failover.sh@116 -- # nvmftestfini 00:22:36.683 00:56:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:36.683 00:56:29 -- nvmf/common.sh@117 -- # sync 00:22:36.683 00:56:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:36.683 00:56:29 -- nvmf/common.sh@120 -- # set +e 00:22:36.683 00:56:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:36.683 00:56:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:22:36.683 rmmod nvme_tcp 00:22:36.683 rmmod nvme_fabrics 00:22:36.683 rmmod nvme_keyring 00:22:36.683 00:56:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:36.683 00:56:29 -- nvmf/common.sh@124 -- # set -e 00:22:36.683 00:56:29 -- nvmf/common.sh@125 -- # return 0 00:22:36.683 00:56:29 -- nvmf/common.sh@478 -- # '[' -n 2842604 ']' 00:22:36.683 00:56:29 -- nvmf/common.sh@479 -- # killprocess 2842604 00:22:36.683 00:56:29 -- common/autotest_common.sh@936 -- # '[' -z 2842604 ']' 00:22:36.683 00:56:29 -- common/autotest_common.sh@940 -- # kill -0 2842604 00:22:36.683 00:56:29 -- common/autotest_common.sh@941 -- # uname 00:22:36.683 00:56:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:36.683 00:56:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2842604 00:22:36.683 00:56:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:36.683 00:56:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:36.683 00:56:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2842604' 00:22:36.683 killing process with pid 2842604 00:22:36.683 00:56:29 -- common/autotest_common.sh@955 -- # kill 2842604 00:22:36.683 00:56:29 -- common/autotest_common.sh@960 -- # wait 2842604 00:22:37.283 00:56:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:37.283 00:56:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:37.283 00:56:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:37.283 00:56:29 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:37.283 00:56:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:37.283 00:56:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:37.283 00:56:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:37.283 00:56:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.821 00:56:31 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:39.821 00:22:39.821 real 0m38.098s 00:22:39.821 user 2m1.608s 00:22:39.821 sys 0m6.868s 00:22:39.821 00:56:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:39.821 00:56:31 -- common/autotest_common.sh@10 -- # set +x 00:22:39.821 ************************************ 00:22:39.821 END TEST nvmf_failover 00:22:39.821 ************************************ 00:22:39.821 00:56:32 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:39.821 00:56:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:39.821 00:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:39.821 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:39.821 ************************************ 00:22:39.821 START TEST nvmf_discovery 00:22:39.821 ************************************ 00:22:39.821 00:56:32 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:22:39.821 * Looking for test storage... 
00:22:39.821 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:39.821 00:56:32 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:39.821 00:56:32 -- nvmf/common.sh@7 -- # uname -s 00:22:39.821 00:56:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:39.821 00:56:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:39.821 00:56:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:39.821 00:56:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:39.821 00:56:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:39.821 00:56:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:39.821 00:56:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:39.821 00:56:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:39.821 00:56:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:39.821 00:56:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:39.821 00:56:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:22:39.821 00:56:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:22:39.821 00:56:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:39.821 00:56:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:39.821 00:56:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:39.821 00:56:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:39.821 00:56:32 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:39.821 00:56:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:39.821 00:56:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:39.821 00:56:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:39.821 00:56:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.821 00:56:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.821 00:56:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.821 00:56:32 -- paths/export.sh@5 -- # export PATH 00:22:39.821 00:56:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:39.821 00:56:32 -- nvmf/common.sh@47 -- # : 0 00:22:39.821 00:56:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:39.821 00:56:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:39.821 00:56:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:39.821 00:56:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:39.821 00:56:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:39.821 00:56:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:39.821 00:56:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:39.821 00:56:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:39.821 00:56:32 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:22:39.821 00:56:32 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:22:39.821 00:56:32 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:22:39.821 00:56:32 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:22:39.821 00:56:32 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:22:39.821 00:56:32 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:22:39.821 00:56:32 -- host/discovery.sh@25 -- # nvmftestinit 00:22:39.821 00:56:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:39.821 00:56:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:39.821 00:56:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:39.821 00:56:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:39.821 00:56:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:39.821 00:56:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:39.821 00:56:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:39.821 00:56:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:39.821 00:56:32 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:22:39.821 00:56:32 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:39.821 00:56:32 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:39.821 00:56:32 -- common/autotest_common.sh@10 -- # set +x 00:22:46.397 00:56:37 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:46.397 00:56:37 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.397 00:56:37 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.397 00:56:37 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.397 00:56:37 -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.397 00:56:37 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.397 00:56:37 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.397 00:56:37 -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.397 00:56:37 -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.397 00:56:37 -- nvmf/common.sh@296 -- # e810=() 00:22:46.397 00:56:37 -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.397 00:56:37 -- nvmf/common.sh@297 -- # x722=() 00:22:46.397 00:56:37 -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.397 00:56:37 -- nvmf/common.sh@298 -- # mlx=() 00:22:46.397 00:56:37 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.397 00:56:37 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.397 00:56:37 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.397 00:56:37 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.397 00:56:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.397 00:56:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:22:46.397 Found 0000:27:00.0 (0x8086 - 0x159b) 00:22:46.397 00:56:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.397 00:56:37 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:22:46.397 Found 0000:27:00.1 (0x8086 - 0x159b) 00:22:46.397 00:56:37 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.397 00:56:37 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.397 00:56:37 -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.397 00:56:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:46.397 00:56:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.397 00:56:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:22:46.397 Found net devices under 0000:27:00.0: cvl_0_0 00:22:46.397 00:56:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.397 00:56:37 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.397 00:56:37 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.397 00:56:37 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:22:46.397 00:56:37 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.397 00:56:37 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:22:46.397 Found net devices under 0000:27:00.1: cvl_0_1 00:22:46.397 00:56:37 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.397 00:56:37 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:22:46.397 00:56:37 -- nvmf/common.sh@403 -- # is_hw=yes 00:22:46.397 00:56:37 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:22:46.397 00:56:37 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:22:46.397 00:56:37 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:46.397 00:56:37 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:46.397 00:56:37 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.397 00:56:37 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:46.397 00:56:37 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:46.397 00:56:37 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:46.397 00:56:37 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:46.397 00:56:37 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:46.397 00:56:37 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:46.397 00:56:37 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:46.397 00:56:37 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:46.397 00:56:37 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:46.397 00:56:37 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:46.397 00:56:38 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:46.397 00:56:38 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:46.397 00:56:38 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:46.397 00:56:38 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:46.397 00:56:38 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:46.397 00:56:38 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:46.397 00:56:38 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:46.397 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:46.397 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:22:46.397 00:22:46.397 --- 10.0.0.2 ping statistics --- 00:22:46.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.397 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:22:46.397 00:56:38 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:46.397 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:46.397 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.278 ms 00:22:46.397 00:22:46.397 --- 10.0.0.1 ping statistics --- 00:22:46.397 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:46.397 rtt min/avg/max/mdev = 0.278/0.278/0.278/0.000 ms 00:22:46.397 00:56:38 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:46.397 00:56:38 -- nvmf/common.sh@411 -- # return 0 00:22:46.397 00:56:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:46.397 00:56:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:46.397 00:56:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:46.397 00:56:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:46.397 00:56:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:46.397 00:56:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:46.397 00:56:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:46.397 00:56:38 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:22:46.398 00:56:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:46.398 00:56:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:46.398 00:56:38 -- common/autotest_common.sh@10 -- # set +x 00:22:46.398 00:56:38 -- nvmf/common.sh@470 -- # nvmfpid=2852572 00:22:46.398 00:56:38 -- nvmf/common.sh@471 -- # waitforlisten 2852572 00:22:46.398 00:56:38 -- common/autotest_common.sh@817 -- # '[' -z 2852572 ']' 00:22:46.398 00:56:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:46.398 00:56:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:46.398 00:56:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:46.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:46.398 00:56:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:46.398 00:56:38 -- common/autotest_common.sh@10 -- # set +x 00:22:46.398 00:56:38 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:46.398 [2024-04-27 00:56:38.319813] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:22:46.398 [2024-04-27 00:56:38.319942] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:46.398 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.398 [2024-04-27 00:56:38.486589] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.398 [2024-04-27 00:56:38.665796] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:46.398 [2024-04-27 00:56:38.665864] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:46.398 [2024-04-27 00:56:38.665881] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:46.398 [2024-04-27 00:56:38.665898] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:46.398 [2024-04-27 00:56:38.665912] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
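The ip/netns sequence a few lines up is the phy-fallback plumbing: the NIC's two ports are split so the target side (cvl_0_0, 10.0.0.2) lives in its own namespace while the initiator keeps cvl_0_1 (10.0.0.1) in the default one, giving NVMe/TCP a real link to cross. Reduced to its commands (run as root; device and namespace names as in this run; address flushes and the iptables accept rule omitted):

    NS=cvl_0_0_ns_spdk
    ip netns add $NS
    ip link set cvl_0_0 netns $NS                    # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
    ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec $NS ip link set cvl_0_0 up
    ip netns exec $NS ip link set lo up
    ping -c 1 10.0.0.2 && ip netns exec $NS ping -c 1 10.0.0.1   # both directions, as verified above

The nvmf_tgt started around this point then binds its TCP listeners to 10.0.0.2 inside that namespace.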
00:22:46.398 [2024-04-27 00:56:38.665960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:46.398 00:56:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:46.398 00:56:39 -- common/autotest_common.sh@850 -- # return 0 00:22:46.398 00:56:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:46.398 00:56:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:46.398 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.398 00:56:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.398 00:56:39 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:46.398 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.398 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.398 [2024-04-27 00:56:39.074854] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:46.398 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.398 00:56:39 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:22:46.398 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.398 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.398 [2024-04-27 00:56:39.087104] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:46.398 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.398 00:56:39 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:22:46.398 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.398 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.656 null0 00:22:46.656 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.656 00:56:39 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:22:46.656 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.656 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.656 null1 00:22:46.656 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.656 00:56:39 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:22:46.656 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:46.656 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.656 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:46.656 00:56:39 -- host/discovery.sh@45 -- # hostpid=2852610 00:22:46.656 00:56:39 -- host/discovery.sh@46 -- # waitforlisten 2852610 /tmp/host.sock 00:22:46.656 00:56:39 -- common/autotest_common.sh@817 -- # '[' -z 2852610 ']' 00:22:46.656 00:56:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:22:46.656 00:56:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:46.656 00:56:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:46.656 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:46.656 00:56:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:46.656 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:46.656 00:56:39 -- host/discovery.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:22:46.656 [2024-04-27 00:56:39.191372] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
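Target-side setup for the discovery test is deliberately small: the TCP transport, a listener for the well-known discovery NQN (nqn.2014-08.org.nvmexpress.discovery) on DISCOVERY_PORT 8009, and two null bdevs to back the namespaces added later. The rpc_cmd calls above map roughly to plain rpc.py invocations like these (a sketch; rpc.py path assumed, default /var/tmp/spdk.sock socket):

    RPC=spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192      # transport options exactly as logged
    $RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
    $RPC bdev_null_create null0 1000 512              # 1000 MB null bdev, 512-byte blocks
    $RPC bdev_null_create null1 1000 512
    $RPC bdev_wait_for_examine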
00:22:46.656 [2024-04-27 00:56:39.191476] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2852610 ] 00:22:46.656 EAL: No free 2048 kB hugepages reported on node 1 00:22:46.656 [2024-04-27 00:56:39.303914] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:46.916 [2024-04-27 00:56:39.396136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:47.486 00:56:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:47.486 00:56:39 -- common/autotest_common.sh@850 -- # return 0 00:22:47.486 00:56:39 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:47.486 00:56:39 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:22:47.486 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:39 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:22:47.486 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:39 -- host/discovery.sh@72 -- # notify_id=0 00:22:47.486 00:56:39 -- host/discovery.sh@83 -- # get_subsystem_names 00:22:47.486 00:56:39 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.486 00:56:39 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.486 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:39 -- host/discovery.sh@59 -- # sort 00:22:47.486 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:39 -- host/discovery.sh@59 -- # xargs 00:22:47.486 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:39 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:22:47.486 00:56:39 -- host/discovery.sh@84 -- # get_bdev_list 00:22:47.486 00:56:39 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.486 00:56:39 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.486 00:56:39 -- host/discovery.sh@55 -- # sort 00:22:47.486 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:39 -- host/discovery.sh@55 -- # xargs 00:22:47.486 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:39 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:22:47.486 00:56:39 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:22:47.486 00:56:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:39 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:40 -- host/discovery.sh@87 -- # get_subsystem_names 00:22:47.486 00:56:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.486 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:40 -- host/discovery.sh@59 -- # xargs 
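Host-side, one RPC does the heavy lifting: bdev_nvme_start_discovery connects to the discovery service and from then on attaches and detaches controllers as the reported log page changes. Everything after it in this test is polling, i.e. the waitforcondition loops reduce to something like this (a sketch of the idiom, values and socket from this run, rpc.py path assumed):

    RPC="spdk/scripts/rpc.py -s /tmp/host.sock"
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
    for i in {1..10}; do                              # max=10 tries, one second apart, like waitforcondition
        names=$($RPC bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs)
        [[ $names == nvme0 ]] && break                # stays empty until the target exposes a subsystem
        sleep 1
    done

The checks below run the same loop against bdev_get_bdevs (expecting nvme0n1, then nvme0n1 nvme0n2 after the second namespace) and against notify_get_notifications to confirm each bdev arrival raised exactly one event.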
00:22:47.486 00:56:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.486 00:56:40 -- host/discovery.sh@59 -- # sort 00:22:47.486 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:40 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:22:47.486 00:56:40 -- host/discovery.sh@88 -- # get_bdev_list 00:22:47.486 00:56:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.486 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.486 00:56:40 -- host/discovery.sh@55 -- # sort 00:22:47.486 00:56:40 -- host/discovery.sh@55 -- # xargs 00:22:47.486 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:40 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:22:47.486 00:56:40 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:22:47.486 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:40 -- host/discovery.sh@91 -- # get_subsystem_names 00:22:47.486 00:56:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.486 00:56:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.486 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:40 -- host/discovery.sh@59 -- # sort 00:22:47.486 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:40 -- host/discovery.sh@59 -- # xargs 00:22:47.486 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:40 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:22:47.486 00:56:40 -- host/discovery.sh@92 -- # get_bdev_list 00:22:47.486 00:56:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.486 00:56:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.486 00:56:40 -- host/discovery.sh@55 -- # sort 00:22:47.486 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.486 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.486 00:56:40 -- host/discovery.sh@55 -- # xargs 00:22:47.486 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.486 00:56:40 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:22:47.748 00:56:40 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:47.748 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.748 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.748 [2024-04-27 00:56:40.187605] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:47.748 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.748 00:56:40 -- host/discovery.sh@97 -- # get_subsystem_names 00:22:47.748 00:56:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.748 00:56:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.748 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.748 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.748 00:56:40 -- host/discovery.sh@59 -- # sort 00:22:47.748 00:56:40 -- host/discovery.sh@59 -- # xargs 00:22:47.748 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.748 00:56:40 -- 
host/discovery.sh@97 -- # [[ '' == '' ]] 00:22:47.748 00:56:40 -- host/discovery.sh@98 -- # get_bdev_list 00:22:47.748 00:56:40 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:47.748 00:56:40 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.748 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.748 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.748 00:56:40 -- host/discovery.sh@55 -- # sort 00:22:47.748 00:56:40 -- host/discovery.sh@55 -- # xargs 00:22:47.748 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.748 00:56:40 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:22:47.748 00:56:40 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:22:47.748 00:56:40 -- host/discovery.sh@79 -- # expected_count=0 00:22:47.748 00:56:40 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:47.748 00:56:40 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:47.748 00:56:40 -- common/autotest_common.sh@901 -- # local max=10 00:22:47.748 00:56:40 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:47.748 00:56:40 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:47.748 00:56:40 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:47.748 00:56:40 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:47.748 00:56:40 -- host/discovery.sh@74 -- # jq '. | length' 00:22:47.748 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.748 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.748 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.748 00:56:40 -- host/discovery.sh@74 -- # notification_count=0 00:22:47.748 00:56:40 -- host/discovery.sh@75 -- # notify_id=0 00:22:47.748 00:56:40 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:47.748 00:56:40 -- common/autotest_common.sh@904 -- # return 0 00:22:47.748 00:56:40 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:22:47.748 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.748 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.748 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:47.748 00:56:40 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:47.748 00:56:40 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:47.748 00:56:40 -- common/autotest_common.sh@901 -- # local max=10 00:22:47.748 00:56:40 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:47.748 00:56:40 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:47.748 00:56:40 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:47.748 00:56:40 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:47.748 00:56:40 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:47.748 00:56:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:47.748 00:56:40 -- host/discovery.sh@59 -- # sort 00:22:47.748 00:56:40 -- common/autotest_common.sh@10 -- # set +x 00:22:47.748 00:56:40 -- host/discovery.sh@59 -- # xargs 00:22:47.748 00:56:40 -- common/autotest_common.sh@577 -- # [[ 0 == 
0 ]] 00:22:47.748 00:56:40 -- common/autotest_common.sh@903 -- # [[ '' == \n\v\m\e\0 ]] 00:22:47.748 00:56:40 -- common/autotest_common.sh@906 -- # sleep 1 00:22:48.315 [2024-04-27 00:56:40.953094] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:48.315 [2024-04-27 00:56:40.953125] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:48.315 [2024-04-27 00:56:40.953146] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:48.573 [2024-04-27 00:56:41.084235] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:48.573 [2024-04-27 00:56:41.265291] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:48.573 [2024-04-27 00:56:41.265320] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:48.833 00:56:41 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:48.833 00:56:41 -- host/discovery.sh@59 -- # xargs 00:22:48.833 00:56:41 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:48.833 00:56:41 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:48.833 00:56:41 -- host/discovery.sh@59 -- # sort 00:22:48.833 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.833 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:48.833 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:48.833 00:56:41 -- common/autotest_common.sh@904 -- # return 0 00:22:48.833 00:56:41 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:48.833 00:56:41 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:22:48.833 00:56:41 -- common/autotest_common.sh@901 -- # local max=10 00:22:48.833 00:56:41 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:48.833 00:56:41 -- host/discovery.sh@55 -- # xargs 00:22:48.833 00:56:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.833 00:56:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:48.833 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.833 00:56:41 -- host/discovery.sh@55 -- # sort 00:22:48.833 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:48.833 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:22:48.833 00:56:41 -- common/autotest_common.sh@904 -- # return 0 00:22:48.833 00:56:41 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:48.833 00:56:41 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:22:48.833 00:56:41 -- common/autotest_common.sh@901 -- # local max=10 00:22:48.833 00:56:41 -- 
common/autotest_common.sh@902 -- # (( max-- )) 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:48.833 00:56:41 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:48.833 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.833 00:56:41 -- host/discovery.sh@63 -- # sort -n 00:22:48.833 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:48.833 00:56:41 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:48.833 00:56:41 -- host/discovery.sh@63 -- # xargs 00:22:48.833 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:22:48.833 00:56:41 -- common/autotest_common.sh@904 -- # return 0 00:22:48.833 00:56:41 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:22:48.833 00:56:41 -- host/discovery.sh@79 -- # expected_count=1 00:22:48.833 00:56:41 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:48.833 00:56:41 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:48.833 00:56:41 -- common/autotest_common.sh@901 -- # local max=10 00:22:48.833 00:56:41 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:48.833 00:56:41 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:48.833 00:56:41 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:22:48.833 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:48.833 00:56:41 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:48.833 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:48.833 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.094 00:56:41 -- host/discovery.sh@74 -- # notification_count=1 00:22:49.094 00:56:41 -- host/discovery.sh@75 -- # notify_id=1 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:49.094 00:56:41 -- common/autotest_common.sh@904 -- # return 0 00:22:49.094 00:56:41 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:22:49.094 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.094 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.094 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.094 00:56:41 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@901 -- # local max=10 00:22:49.094 00:56:41 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:49.094 00:56:41 -- host/discovery.sh@55 -- # xargs 00:22:49.094 00:56:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.094 00:56:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:49.094 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.094 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.094 00:56:41 -- host/discovery.sh@55 -- # sort 00:22:49.094 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:49.094 00:56:41 -- common/autotest_common.sh@904 -- # return 0 00:22:49.094 00:56:41 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:22:49.094 00:56:41 -- host/discovery.sh@79 -- # expected_count=1 00:22:49.094 00:56:41 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:49.094 00:56:41 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:49.094 00:56:41 -- common/autotest_common.sh@901 -- # local max=10 00:22:49.094 00:56:41 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:49.094 00:56:41 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:22:49.094 00:56:41 -- host/discovery.sh@74 -- # jq '. 
| length' 00:22:49.094 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.094 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.094 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.094 00:56:41 -- host/discovery.sh@74 -- # notification_count=1 00:22:49.094 00:56:41 -- host/discovery.sh@75 -- # notify_id=2 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:49.094 00:56:41 -- common/autotest_common.sh@904 -- # return 0 00:22:49.094 00:56:41 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:22:49.094 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.094 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.094 [2024-04-27 00:56:41.624080] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:49.094 [2024-04-27 00:56:41.624692] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:49.094 [2024-04-27 00:56:41.624738] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:49.094 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.094 00:56:41 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@901 -- # local max=10 00:22:49.094 00:56:41 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:49.094 00:56:41 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:49.094 00:56:41 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:49.094 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.094 00:56:41 -- host/discovery.sh@59 -- # sort 00:22:49.094 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.094 00:56:41 -- host/discovery.sh@59 -- # xargs 00:22:49.094 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:49.094 00:56:41 -- common/autotest_common.sh@904 -- # return 0 00:22:49.094 00:56:41 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@901 -- # local max=10 00:22:49.094 00:56:41 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:49.094 00:56:41 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:49.094 00:56:41 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.094 00:56:41 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:49.095 00:56:41 -- host/discovery.sh@55 -- # sort 00:22:49.095 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.095 00:56:41 -- host/discovery.sh@55 -- # xargs 00:22:49.095 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.095 00:56:41 -- common/autotest_common.sh@577 
-- # [[ 0 == 0 ]] 00:22:49.095 00:56:41 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:49.095 00:56:41 -- common/autotest_common.sh@904 -- # return 0 00:22:49.095 00:56:41 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:49.095 00:56:41 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:22:49.095 00:56:41 -- common/autotest_common.sh@901 -- # local max=10 00:22:49.095 00:56:41 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:49.095 00:56:41 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:49.095 00:56:41 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:49.095 00:56:41 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:49.095 00:56:41 -- host/discovery.sh@63 -- # sort -n 00:22:49.095 00:56:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:49.095 00:56:41 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:49.095 00:56:41 -- common/autotest_common.sh@10 -- # set +x 00:22:49.095 00:56:41 -- host/discovery.sh@63 -- # xargs 00:22:49.095 00:56:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:49.095 [2024-04-27 00:56:41.752802] bdev_nvme.c:6847:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:22:49.095 00:56:41 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:22:49.095 00:56:41 -- common/autotest_common.sh@906 -- # sleep 1 00:22:49.360 [2024-04-27 00:56:41.813406] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:49.360 [2024-04-27 00:56:41.813433] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:49.360 [2024-04-27 00:56:41.813442] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:50.295 00:56:42 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:50.295 00:56:42 -- host/discovery.sh@63 -- # xargs 00:22:50.295 00:56:42 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:50.295 00:56:42 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:50.295 00:56:42 -- host/discovery.sh@63 -- # sort -n 00:22:50.295 00:56:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.295 00:56:42 -- common/autotest_common.sh@10 -- # set +x 00:22:50.295 00:56:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:22:50.295 00:56:42 -- common/autotest_common.sh@904 -- # return 0 00:22:50.295 00:56:42 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:22:50.295 00:56:42 -- host/discovery.sh@79 -- # expected_count=0 00:22:50.295 00:56:42 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:50.295 00:56:42 -- 
common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:50.295 00:56:42 -- common/autotest_common.sh@901 -- # local max=10 00:22:50.295 00:56:42 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:50.295 00:56:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:50.295 00:56:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.295 00:56:42 -- host/discovery.sh@74 -- # jq '. | length' 00:22:50.295 00:56:42 -- common/autotest_common.sh@10 -- # set +x 00:22:50.295 00:56:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.295 00:56:42 -- host/discovery.sh@74 -- # notification_count=0 00:22:50.295 00:56:42 -- host/discovery.sh@75 -- # notify_id=2 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:50.295 00:56:42 -- common/autotest_common.sh@904 -- # return 0 00:22:50.295 00:56:42 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:50.295 00:56:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.295 00:56:42 -- common/autotest_common.sh@10 -- # set +x 00:22:50.295 [2024-04-27 00:56:42.844776] bdev_nvme.c:6905:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:22:50.295 [2024-04-27 00:56:42.844811] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:50.295 00:56:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.295 00:56:42 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@901 -- # local max=10 00:22:50.295 00:56:42 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:50.295 [2024-04-27 00:56:42.853511] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.295 [2024-04-27 00:56:42.853541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.295 [2024-04-27 00:56:42.853553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.295 [2024-04-27 00:56:42.853562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.295 [2024-04-27 00:56:42.853570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.295 [2024-04-27 00:56:42.853578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.295 [2024-04-27 00:56:42.853586] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 
nsid:0 cdw10:00000000 cdw11:00000000 00:22:50.295 [2024-04-27 00:56:42.853593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:50.295 [2024-04-27 00:56:42.853602] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:22:50.295 00:56:42 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.295 00:56:42 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:50.295 00:56:42 -- host/discovery.sh@59 -- # sort 00:22:50.295 00:56:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.295 00:56:42 -- common/autotest_common.sh@10 -- # set +x 00:22:50.295 00:56:42 -- host/discovery.sh@59 -- # xargs 00:22:50.295 [2024-04-27 00:56:42.863495] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:22:50.295 00:56:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.295 [2024-04-27 00:56:42.873506] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:50.295 [2024-04-27 00:56:42.873892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.874024] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.874037] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:22:50.295 [2024-04-27 00:56:42.874047] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:22:50.295 [2024-04-27 00:56:42.874062] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:22:50.295 [2024-04-27 00:56:42.874084] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:50.295 [2024-04-27 00:56:42.874092] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:50.295 [2024-04-27 00:56:42.874103] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:50.295 [2024-04-27 00:56:42.874119] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
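
The common/autotest_common.sh@900-@906 tags that recur throughout this test all belong to one small polling helper. Reconstructed from the xtrace output above (each line of the sketch maps to a traced tag), it looks approximately like this; the exact body in the SPDK tree may differ:

    # waitforcondition: poll an arbitrary bash expression until it holds.
    # Approximate reconstruction from the @900-@906 xtrace tags in this log.
    waitforcondition() {
        local cond=$1              # @900: condition string, eval'd verbatim
        local max=10               # @901: bounded number of attempts
        while ((max--)); do        # @902
            if eval "$cond"; then  # @903: e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
                return 0           # @904: condition held
            fi
            sleep 1                # @906: back off before the next probe
        done
        return 1                   # condition never held within roughly ten seconds
    }
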
00:22:50.295 [2024-04-27 00:56:42.883550] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:50.295 [2024-04-27 00:56:42.883883] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.884114] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.884123] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:22:50.295 [2024-04-27 00:56:42.884133] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:22:50.295 [2024-04-27 00:56:42.884145] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:22:50.295 [2024-04-27 00:56:42.884161] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:50.295 [2024-04-27 00:56:42.884168] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:50.295 [2024-04-27 00:56:42.884176] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:50.295 [2024-04-27 00:56:42.884187] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:50.295 00:56:42 -- common/autotest_common.sh@904 -- # return 0 00:22:50.295 00:56:42 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@901 -- # local max=10 00:22:50.295 00:56:42 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:50.295 [2024-04-27 00:56:42.893587] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:50.295 [2024-04-27 00:56:42.893756] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.893862] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.893872] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:22:50.295 [2024-04-27 00:56:42.893882] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:22:50.295 [2024-04-27 00:56:42.893895] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:22:50.295 [2024-04-27 00:56:42.893907] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:50.295 [2024-04-27 00:56:42.893916] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:50.295 [2024-04-27 00:56:42.893924] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:22:50.295 [2024-04-27 00:56:42.893937] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.295 00:56:42 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.295 00:56:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.295 00:56:42 -- common/autotest_common.sh@10 -- # set +x 00:22:50.295 00:56:42 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:50.295 00:56:42 -- host/discovery.sh@55 -- # sort 00:22:50.295 00:56:42 -- host/discovery.sh@55 -- # xargs 00:22:50.295 [2024-04-27 00:56:42.903635] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:50.295 [2024-04-27 00:56:42.903967] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.904177] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.904187] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:22:50.295 [2024-04-27 00:56:42.904196] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:22:50.295 [2024-04-27 00:56:42.904209] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:22:50.295 [2024-04-27 00:56:42.904230] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:50.295 [2024-04-27 00:56:42.904237] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:50.295 [2024-04-27 00:56:42.904245] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:50.295 [2024-04-27 00:56:42.904257] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.295 [2024-04-27 00:56:42.913676] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:50.295 [2024-04-27 00:56:42.914045] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.914237] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.914249] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:22:50.295 [2024-04-27 00:56:42.914260] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:22:50.295 [2024-04-27 00:56:42.914274] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:22:50.295 [2024-04-27 00:56:42.914293] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:50.295 [2024-04-27 00:56:42.914303] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:50.295 [2024-04-27 00:56:42.914311] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:50.295 [2024-04-27 00:56:42.914322] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
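
The host/discovery.sh@55, @63 and @74-@75 pipelines traced throughout this section are the probes those wait conditions eval. Condensed from the xtrace (socket path and jq filters exactly as logged; the function bodies are approximations of the upstream script):

    # get_bdev_list: names of all host-side bdevs, space-joined ("nvme0n1 nvme0n2")
    get_bdev_list() {   # host/discovery.sh@55
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # get_subsystem_paths: listener ports of one controller, space-joined ("4420 4421")
    get_subsystem_paths() {   # host/discovery.sh@63
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
            jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

    # get_notification_count: count RPC notifications newer than notify_id, then
    # advance notify_id, matching the notification_count=/notify_id= lines above.
    get_notification_count() {   # host/discovery.sh@74-@75
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i $notify_id | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }
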
00:22:50.295 00:56:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.295 [2024-04-27 00:56:42.923715] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:50.295 [2024-04-27 00:56:42.924081] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.924297] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:50.295 [2024-04-27 00:56:42.924306] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005440 with addr=10.0.0.2, port=4420 00:22:50.295 [2024-04-27 00:56:42.924315] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005440 is same with the state(5) to be set 00:22:50.295 [2024-04-27 00:56:42.924326] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005440 (9): Bad file descriptor 00:22:50.295 [2024-04-27 00:56:42.924341] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:50.295 [2024-04-27 00:56:42.924348] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:50.295 [2024-04-27 00:56:42.924355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:50.295 [2024-04-27 00:56:42.924366] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:50.295 00:56:42 -- common/autotest_common.sh@904 -- # return 0 00:22:50.295 00:56:42 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:22:50.295 [2024-04-27 00:56:42.932369] bdev_nvme.c:6710:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:22:50.295 00:56:42 -- common/autotest_common.sh@901 -- # local max=10 00:22:50.295 [2024-04-27 00:56:42.932397] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:50.295 00:56:42 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:22:50.295 00:56:42 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:22:50.295 00:56:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.295 00:56:42 -- common/autotest_common.sh@10 -- # set +x 00:22:50.295 00:56:42 -- host/discovery.sh@63 -- # xargs 00:22:50.295 00:56:42 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:22:50.295 00:56:42 -- host/discovery.sh@63 -- # sort -n 00:22:50.295 00:56:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.295 00:56:42 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:22:50.295 00:56:42 -- common/autotest_common.sh@904 -- # return 0 00:22:50.295 00:56:42 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:22:50.296 00:56:42 -- host/discovery.sh@79 -- # expected_count=0 00:22:50.296 00:56:42 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:50.296 00:56:42 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:50.296 00:56:42 -- common/autotest_common.sh@901 -- # local max=10 00:22:50.296 00:56:42 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.296 00:56:42 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:50.296 00:56:42 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:50.296 00:56:42 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:50.296 00:56:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.296 00:56:42 -- host/discovery.sh@74 -- # jq '. | length' 00:22:50.296 00:56:42 -- common/autotest_common.sh@10 -- # set +x 00:22:50.296 00:56:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.554 00:56:43 -- host/discovery.sh@74 -- # notification_count=0 00:22:50.554 00:56:43 -- host/discovery.sh@75 -- # notify_id=2 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:50.554 00:56:43 -- common/autotest_common.sh@904 -- # return 0 00:22:50.554 00:56:43 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:22:50.554 00:56:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.554 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:50.554 00:56:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.554 00:56:43 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:22:50.554 00:56:43 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:22:50.554 00:56:43 -- common/autotest_common.sh@901 -- # local max=10 00:22:50.554 00:56:43 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:22:50.554 00:56:43 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:22:50.554 00:56:43 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:22:50.554 00:56:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.554 00:56:43 -- host/discovery.sh@59 -- # xargs 00:22:50.554 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:50.554 00:56:43 -- host/discovery.sh@59 -- # sort 00:22:50.554 00:56:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:22:50.554 00:56:43 -- common/autotest_common.sh@904 -- # return 0 00:22:50.554 00:56:43 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:22:50.554 00:56:43 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:22:50.554 00:56:43 -- common/autotest_common.sh@901 -- # local max=10 00:22:50.554 00:56:43 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # get_bdev_list 00:22:50.554 00:56:43 -- host/discovery.sh@55 -- # xargs 00:22:50.554 00:56:43 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.554 00:56:43 -- 
host/discovery.sh@55 -- # jq -r '.[].name' 00:22:50.554 00:56:43 -- host/discovery.sh@55 -- # sort 00:22:50.554 00:56:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.554 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:50.554 00:56:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:22:50.554 00:56:43 -- common/autotest_common.sh@904 -- # return 0 00:22:50.554 00:56:43 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:22:50.554 00:56:43 -- host/discovery.sh@79 -- # expected_count=2 00:22:50.554 00:56:43 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:22:50.554 00:56:43 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:22:50.554 00:56:43 -- common/autotest_common.sh@901 -- # local max=10 00:22:50.554 00:56:43 -- common/autotest_common.sh@902 -- # (( max-- )) 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:22:50.554 00:56:43 -- common/autotest_common.sh@903 -- # get_notification_count 00:22:50.554 00:56:43 -- host/discovery.sh@74 -- # jq '. | length' 00:22:50.554 00:56:43 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:22:50.554 00:56:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.554 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:50.554 00:56:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:50.554 00:56:43 -- host/discovery.sh@74 -- # notification_count=2 00:22:50.554 00:56:43 -- host/discovery.sh@75 -- # notify_id=4 00:22:50.555 00:56:43 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:22:50.555 00:56:43 -- common/autotest_common.sh@904 -- # return 0 00:22:50.555 00:56:43 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:50.555 00:56:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:50.555 00:56:43 -- common/autotest_common.sh@10 -- # set +x 00:22:51.496 [2024-04-27 00:56:44.192999] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:51.496 [2024-04-27 00:56:44.193026] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:51.496 [2024-04-27 00:56:44.193042] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:51.758 [2024-04-27 00:56:44.281099] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:22:51.758 [2024-04-27 00:56:44.346128] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:51.758 [2024-04-27 00:56:44.346169] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:22:51.758 00:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.758 00:56:44 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:51.758 00:56:44 -- common/autotest_common.sh@638 -- # local es=0 00:22:51.758 00:56:44 -- 
common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:51.758 00:56:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:51.758 00:56:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:51.758 00:56:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:51.758 00:56:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:51.758 00:56:44 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:51.758 00:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.758 00:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:51.758 request: 00:22:51.758 { 00:22:51.758 "name": "nvme", 00:22:51.758 "trtype": "tcp", 00:22:51.758 "traddr": "10.0.0.2", 00:22:51.758 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:51.758 "adrfam": "ipv4", 00:22:51.758 "trsvcid": "8009", 00:22:51.758 "wait_for_attach": true, 00:22:51.758 "method": "bdev_nvme_start_discovery", 00:22:51.758 "req_id": 1 00:22:51.758 } 00:22:51.758 Got JSON-RPC error response 00:22:51.758 response: 00:22:51.758 { 00:22:51.758 "code": -17, 00:22:51.758 "message": "File exists" 00:22:51.758 } 00:22:51.758 00:56:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:51.758 00:56:44 -- common/autotest_common.sh@641 -- # es=1 00:22:51.758 00:56:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:51.758 00:56:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:51.758 00:56:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:51.758 00:56:44 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:22:51.758 00:56:44 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:51.758 00:56:44 -- host/discovery.sh@67 -- # xargs 00:22:51.758 00:56:44 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:51.758 00:56:44 -- host/discovery.sh@67 -- # sort 00:22:51.758 00:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.758 00:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:51.758 00:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.758 00:56:44 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:22:51.758 00:56:44 -- host/discovery.sh@146 -- # get_bdev_list 00:22:51.758 00:56:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.758 00:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.758 00:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:51.758 00:56:44 -- host/discovery.sh@55 -- # sort 00:22:51.758 00:56:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:51.758 00:56:44 -- host/discovery.sh@55 -- # xargs 00:22:51.758 00:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:51.758 00:56:44 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:51.758 00:56:44 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:51.758 00:56:44 -- common/autotest_common.sh@638 -- # local es=0 00:22:51.758 00:56:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:51.758 00:56:44 -- 
common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:51.758 00:56:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:51.758 00:56:44 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:22:51.758 00:56:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:51.758 00:56:44 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:22:51.758 00:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:51.758 00:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:52.019 request: 00:22:52.019 { 00:22:52.019 "name": "nvme_second", 00:22:52.019 "trtype": "tcp", 00:22:52.019 "traddr": "10.0.0.2", 00:22:52.019 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:52.019 "adrfam": "ipv4", 00:22:52.019 "trsvcid": "8009", 00:22:52.019 "wait_for_attach": true, 00:22:52.019 "method": "bdev_nvme_start_discovery", 00:22:52.019 "req_id": 1 00:22:52.019 } 00:22:52.019 Got JSON-RPC error response 00:22:52.019 response: 00:22:52.019 { 00:22:52.019 "code": -17, 00:22:52.019 "message": "File exists" 00:22:52.019 } 00:22:52.019 00:56:44 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:52.019 00:56:44 -- common/autotest_common.sh@641 -- # es=1 00:22:52.019 00:56:44 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:52.019 00:56:44 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:52.019 00:56:44 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:52.019 00:56:44 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:22:52.019 00:56:44 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:52.019 00:56:44 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:52.019 00:56:44 -- host/discovery.sh@67 -- # xargs 00:22:52.019 00:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.019 00:56:44 -- host/discovery.sh@67 -- # sort 00:22:52.019 00:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:52.019 00:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.019 00:56:44 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:22:52.019 00:56:44 -- host/discovery.sh@152 -- # get_bdev_list 00:22:52.019 00:56:44 -- host/discovery.sh@55 -- # xargs 00:22:52.019 00:56:44 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:52.019 00:56:44 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:22:52.019 00:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.019 00:56:44 -- host/discovery.sh@55 -- # sort 00:22:52.019 00:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:52.019 00:56:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:52.019 00:56:44 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:22:52.019 00:56:44 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:52.019 00:56:44 -- common/autotest_common.sh@638 -- # local es=0 00:22:52.019 00:56:44 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:52.019 00:56:44 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:22:52.019 00:56:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:52.019 00:56:44 -- common/autotest_common.sh@630 -- # 
type -t rpc_cmd 00:22:52.019 00:56:44 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:52.019 00:56:44 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:22:52.019 00:56:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:52.019 00:56:44 -- common/autotest_common.sh@10 -- # set +x 00:22:52.959 [2024-04-27 00:56:45.558922] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.959 [2024-04-27 00:56:45.559091] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:52.959 [2024-04-27 00:56:45.559106] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010040 with addr=10.0.0.2, port=8010 00:22:52.959 [2024-04-27 00:56:45.559133] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:52.959 [2024-04-27 00:56:45.559144] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:52.959 [2024-04-27 00:56:45.559154] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:53.897 [2024-04-27 00:56:46.558902] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.897 [2024-04-27 00:56:46.559273] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:22:53.897 [2024-04-27 00:56:46.559285] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000010240 with addr=10.0.0.2, port=8010 00:22:53.897 [2024-04-27 00:56:46.559312] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:22:53.897 [2024-04-27 00:56:46.559320] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:22:53.897 [2024-04-27 00:56:46.559329] bdev_nvme.c:6985:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:22:55.273 [2024-04-27 00:56:47.558455] bdev_nvme.c:6966:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:22:55.273 request: 00:22:55.273 { 00:22:55.273 "name": "nvme_second", 00:22:55.273 "trtype": "tcp", 00:22:55.273 "traddr": "10.0.0.2", 00:22:55.273 "hostnqn": "nqn.2021-12.io.spdk:test", 00:22:55.273 "adrfam": "ipv4", 00:22:55.273 "trsvcid": "8010", 00:22:55.273 "attach_timeout_ms": 3000, 00:22:55.273 "method": "bdev_nvme_start_discovery", 00:22:55.273 "req_id": 1 00:22:55.273 } 00:22:55.273 Got JSON-RPC error response 00:22:55.273 response: 00:22:55.273 { 00:22:55.273 "code": -110, 00:22:55.273 "message": "Connection timed out" 00:22:55.273 } 00:22:55.273 00:56:47 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:22:55.273 00:56:47 -- common/autotest_common.sh@641 -- # es=1 00:22:55.273 00:56:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:55.273 00:56:47 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:55.273 00:56:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:55.273 00:56:47 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:22:55.273 00:56:47 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:22:55.273 00:56:47 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:22:55.273 00:56:47 -- host/discovery.sh@67 -- # sort 00:22:55.273 00:56:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:55.273 00:56:47 -- host/discovery.sh@67 -- # xargs 00:22:55.273 00:56:47 -- common/autotest_common.sh@10 -- # set 
+x 00:22:55.273 00:56:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:55.273 00:56:47 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:22:55.273 00:56:47 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:22:55.273 00:56:47 -- host/discovery.sh@161 -- # kill 2852610 00:22:55.273 00:56:47 -- host/discovery.sh@162 -- # nvmftestfini 00:22:55.274 00:56:47 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:55.274 00:56:47 -- nvmf/common.sh@117 -- # sync 00:22:55.274 00:56:47 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:55.274 00:56:47 -- nvmf/common.sh@120 -- # set +e 00:22:55.274 00:56:47 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:55.274 00:56:47 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:55.274 rmmod nvme_tcp 00:22:55.274 rmmod nvme_fabrics 00:22:55.274 rmmod nvme_keyring 00:22:55.274 00:56:47 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:55.274 00:56:47 -- nvmf/common.sh@124 -- # set -e 00:22:55.274 00:56:47 -- nvmf/common.sh@125 -- # return 0 00:22:55.274 00:56:47 -- nvmf/common.sh@478 -- # '[' -n 2852572 ']' 00:22:55.274 00:56:47 -- nvmf/common.sh@479 -- # killprocess 2852572 00:22:55.274 00:56:47 -- common/autotest_common.sh@936 -- # '[' -z 2852572 ']' 00:22:55.274 00:56:47 -- common/autotest_common.sh@940 -- # kill -0 2852572 00:22:55.274 00:56:47 -- common/autotest_common.sh@941 -- # uname 00:22:55.274 00:56:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:55.274 00:56:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2852572 00:22:55.274 00:56:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:55.274 00:56:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:55.274 00:56:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2852572' 00:22:55.274 killing process with pid 2852572 00:22:55.274 00:56:47 -- common/autotest_common.sh@955 -- # kill 2852572 00:22:55.274 00:56:47 -- common/autotest_common.sh@960 -- # wait 2852572 00:22:55.533 00:56:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:55.533 00:56:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:55.533 00:56:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:55.533 00:56:48 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:55.533 00:56:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:55.533 00:56:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:55.533 00:56:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:55.533 00:56:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.075 00:56:50 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:58.075 00:22:58.075 real 0m18.098s 00:22:58.075 user 0m21.431s 00:22:58.075 sys 0m5.898s 00:22:58.075 00:56:50 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:58.075 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:22:58.075 ************************************ 00:22:58.075 END TEST nvmf_discovery 00:22:58.075 ************************************ 00:22:58.075 00:56:50 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:58.075 00:56:50 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:58.075 00:56:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:58.075 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:22:58.075 ************************************ 00:22:58.075 START 
TEST nvmf_discovery_remove_ifc 00:22:58.075 ************************************ 00:22:58.075 00:56:50 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:58.075 * Looking for test storage... 00:22:58.075 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:22:58.075 00:56:50 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:22:58.075 00:56:50 -- nvmf/common.sh@7 -- # uname -s 00:22:58.075 00:56:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:58.075 00:56:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:58.075 00:56:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:58.075 00:56:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:58.075 00:56:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:58.075 00:56:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:58.075 00:56:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:58.075 00:56:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:58.075 00:56:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:58.075 00:56:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:58.075 00:56:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:22:58.075 00:56:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:22:58.075 00:56:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:58.075 00:56:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:58.075 00:56:50 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:22:58.075 00:56:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:58.075 00:56:50 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:22:58.075 00:56:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:58.075 00:56:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:58.075 00:56:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:58.075 00:56:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.075 00:56:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.075 00:56:50 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.075 00:56:50 -- paths/export.sh@5 -- # export PATH 00:22:58.075 00:56:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:58.075 00:56:50 -- nvmf/common.sh@47 -- # : 0 00:22:58.075 00:56:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:58.075 00:56:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:58.075 00:56:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:58.075 00:56:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:58.075 00:56:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:58.075 00:56:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:58.075 00:56:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:58.075 00:56:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:58.075 00:56:50 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:58.075 00:56:50 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:58.075 00:56:50 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:58.075 00:56:50 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:58.075 00:56:50 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:58.075 00:56:50 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:58.075 00:56:50 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:58.075 00:56:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:58.075 00:56:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:58.075 00:56:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:58.075 00:56:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:58.075 00:56:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:58.075 00:56:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:58.075 00:56:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:58.075 00:56:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:58.075 00:56:50 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:22:58.075 00:56:50 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:22:58.075 00:56:50 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:58.075 00:56:50 -- common/autotest_common.sh@10 -- # set +x 00:23:03.351 00:56:55 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:03.351 00:56:55 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:03.351 00:56:55 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:03.351 
00:56:55 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:03.351 00:56:55 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:03.351 00:56:55 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:03.351 00:56:55 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:03.351 00:56:55 -- nvmf/common.sh@295 -- # net_devs=() 00:23:03.351 00:56:55 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:03.351 00:56:55 -- nvmf/common.sh@296 -- # e810=() 00:23:03.351 00:56:55 -- nvmf/common.sh@296 -- # local -ga e810 00:23:03.351 00:56:55 -- nvmf/common.sh@297 -- # x722=() 00:23:03.351 00:56:55 -- nvmf/common.sh@297 -- # local -ga x722 00:23:03.351 00:56:55 -- nvmf/common.sh@298 -- # mlx=() 00:23:03.351 00:56:55 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:03.351 00:56:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:03.351 00:56:55 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:03.351 00:56:55 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:03.351 00:56:55 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:03.351 00:56:55 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:03.351 00:56:55 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:03.352 00:56:55 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:03.352 00:56:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:03.352 00:56:55 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:03.352 00:56:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:03.352 00:56:55 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:03.352 00:56:55 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:03.352 00:56:55 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:03.352 00:56:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.352 00:56:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:03.352 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:03.352 00:56:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:03.352 00:56:55 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:03.352 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:03.352 00:56:55 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:03.352 00:56:55 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.352 
00:56:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.352 00:56:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:03.352 00:56:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.352 00:56:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:03.352 Found net devices under 0000:27:00.0: cvl_0_0 00:23:03.352 00:56:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.352 00:56:55 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:03.352 00:56:55 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:03.352 00:56:55 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:03.352 00:56:55 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:03.352 00:56:55 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:03.352 Found net devices under 0000:27:00.1: cvl_0_1 00:23:03.352 00:56:55 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:03.352 00:56:55 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:03.352 00:56:55 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:03.352 00:56:55 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:03.352 00:56:55 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:03.352 00:56:55 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:03.352 00:56:55 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:03.352 00:56:55 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:03.352 00:56:55 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:03.352 00:56:55 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:03.352 00:56:55 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:03.352 00:56:55 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:03.352 00:56:55 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:03.352 00:56:55 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:03.352 00:56:55 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:03.352 00:56:55 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:03.352 00:56:55 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:03.352 00:56:55 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:03.352 00:56:55 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:03.352 00:56:55 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:03.352 00:56:55 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:03.352 00:56:55 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:03.352 00:56:55 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:03.352 00:56:55 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:03.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:03.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.578 ms 00:23:03.352 00:23:03.352 --- 10.0.0.2 ping statistics --- 00:23:03.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.352 rtt min/avg/max/mdev = 0.578/0.578/0.578/0.000 ms 00:23:03.352 00:56:55 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:03.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:03.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.296 ms 00:23:03.352 00:23:03.352 --- 10.0.0.1 ping statistics --- 00:23:03.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:03.352 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:23:03.352 00:56:55 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:03.352 00:56:55 -- nvmf/common.sh@411 -- # return 0 00:23:03.352 00:56:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:03.352 00:56:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:03.352 00:56:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:03.352 00:56:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:03.352 00:56:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:03.352 00:56:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:03.352 00:56:55 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:23:03.352 00:56:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:03.352 00:56:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:03.352 00:56:55 -- common/autotest_common.sh@10 -- # set +x 00:23:03.352 00:56:55 -- nvmf/common.sh@470 -- # nvmfpid=2858472 00:23:03.352 00:56:55 -- nvmf/common.sh@471 -- # waitforlisten 2858472 00:23:03.352 00:56:55 -- common/autotest_common.sh@817 -- # '[' -z 2858472 ']' 00:23:03.352 00:56:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:03.352 00:56:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:03.352 00:56:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:03.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:03.352 00:56:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:03.352 00:56:55 -- common/autotest_common.sh@10 -- # set +x 00:23:03.352 00:56:55 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:03.352 [2024-04-27 00:56:55.961981] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:23:03.352 [2024-04-27 00:56:55.962081] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:03.352 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.609 [2024-04-27 00:56:56.082448] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.609 [2024-04-27 00:56:56.178586] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:03.609 [2024-04-27 00:56:56.178620] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:03.609 [2024-04-27 00:56:56.178630] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:03.609 [2024-04-27 00:56:56.178639] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:03.609 [2024-04-27 00:56:56.178647] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
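The nvmf_tcp_init trace above builds the whole test topology out of one dual-port NIC: one port is moved into a private network namespace to play the NVMe/TCP target, while the other stays in the root namespace as the initiator. A minimal sketch of the equivalent commands, with interface names and addresses taken from the log (an illustration of what the harness does, not its exact code):

    TARGET_NS=cvl_0_0_ns_spdk
    ip netns add "$TARGET_NS"
    ip link set cvl_0_0 netns "$TARGET_NS"             # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP, root namespace
    ip netns exec "$TARGET_NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$TARGET_NS" ip link set cvl_0_0 up
    ip netns exec "$TARGET_NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
    ping -c 1 10.0.0.2                                 # initiator -> target
    ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1      # target -> initiator

The two pings at the end are the same sanity check the log records before nvmf_tgt is launched inside the namespace (the EAL and app start notices just above).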
00:23:03.609 [2024-04-27 00:56:56.178672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.175 00:56:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:04.175 00:56:56 -- common/autotest_common.sh@850 -- # return 0 00:23:04.175 00:56:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:04.175 00:56:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:04.175 00:56:56 -- common/autotest_common.sh@10 -- # set +x 00:23:04.175 00:56:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.175 00:56:56 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:23:04.175 00:56:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:04.175 00:56:56 -- common/autotest_common.sh@10 -- # set +x 00:23:04.175 [2024-04-27 00:56:56.693277] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.175 [2024-04-27 00:56:56.701433] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:04.175 null0 00:23:04.175 [2024-04-27 00:56:56.733348] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:04.175 00:56:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:04.175 00:56:56 -- host/discovery_remove_ifc.sh@59 -- # hostpid=2858756 00:23:04.175 00:56:56 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2858756 /tmp/host.sock 00:23:04.175 00:56:56 -- common/autotest_common.sh@817 -- # '[' -z 2858756 ']' 00:23:04.175 00:56:56 -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:04.175 00:56:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:23:04.175 00:56:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:04.175 00:56:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:04.175 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:04.175 00:56:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:04.175 00:56:56 -- common/autotest_common.sh@10 -- # set +x 00:23:04.175 [2024-04-27 00:56:56.827616] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:23:04.175 [2024-04-27 00:56:56.827718] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2858756 ] 00:23:04.436 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.436 [2024-04-27 00:56:56.940038] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.436 [2024-04-27 00:56:57.029520] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.005 00:56:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:05.005 00:56:57 -- common/autotest_common.sh@850 -- # return 0 00:23:05.005 00:56:57 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:05.005 00:56:57 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:05.005 00:56:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.005 00:56:57 -- common/autotest_common.sh@10 -- # set +x 00:23:05.005 00:56:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.005 00:56:57 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:05.005 00:56:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.005 00:56:57 -- common/autotest_common.sh@10 -- # set +x 00:23:05.005 00:56:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:05.005 00:56:57 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:05.005 00:56:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:05.005 00:56:57 -- common/autotest_common.sh@10 -- # set +x 00:23:06.424 [2024-04-27 00:56:58.728107] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:06.424 [2024-04-27 00:56:58.728139] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:06.424 [2024-04-27 00:56:58.728158] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:06.424 [2024-04-27 00:56:58.857245] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:06.424 [2024-04-27 00:56:58.957218] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:06.424 [2024-04-27 00:56:58.957284] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:06.424 [2024-04-27 00:56:58.957321] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:06.424 [2024-04-27 00:56:58.957340] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:06.424 [2024-04-27 00:56:58.957369] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:06.424 00:56:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.424 00:56:58 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:23:06.424 00:56:58 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:06.424 00:56:58 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.424 00:56:58 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:06.424 00:56:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.424 00:56:58 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:06.424 00:56:58 -- common/autotest_common.sh@10 -- # set +x 00:23:06.424 [2024-04-27 00:56:58.965480] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x614000006840 was disconnected and freed. delete nvme_qpair. 00:23:06.424 00:56:58 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:06.424 00:56:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:06.424 00:56:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:06.424 00:56:59 -- common/autotest_common.sh@10 -- # set +x 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:06.424 00:56:59 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:06.710 00:56:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:06.710 00:56:59 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:06.710 00:56:59 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:07.645 00:57:00 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:07.645 00:57:00 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:07.645 00:57:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:07.645 00:57:00 -- common/autotest_common.sh@10 -- # set +x 00:23:07.645 00:57:00 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:07.645 00:57:00 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:07.645 00:57:00 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:07.645 00:57:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:07.645 00:57:00 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:07.645 00:57:00 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:08.585 00:57:01 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:08.585 00:57:01 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:08.585 00:57:01 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:08.585 00:57:01 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:08.585 00:57:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:08.585 00:57:01 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:08.585 00:57:01 -- common/autotest_common.sh@10 -- # set +x 00:23:08.585 00:57:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:08.585 00:57:01 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:08.585 00:57:01 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:09.962 00:57:02 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:09.962 00:57:02 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:09.962 00:57:02 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:09.962 00:57:02 -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:09.962 00:57:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:09.962 00:57:02 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:09.962 00:57:02 -- common/autotest_common.sh@10 -- # set +x 00:23:09.962 00:57:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:09.962 00:57:02 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:09.962 00:57:02 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:10.900 00:57:03 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:10.900 00:57:03 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:10.900 00:57:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:10.900 00:57:03 -- common/autotest_common.sh@10 -- # set +x 00:23:10.900 00:57:03 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:10.900 00:57:03 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:10.900 00:57:03 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:10.900 00:57:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:10.900 00:57:03 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:10.900 00:57:03 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:11.833 00:57:04 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:11.833 00:57:04 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:11.833 00:57:04 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:11.833 00:57:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:11.833 00:57:04 -- common/autotest_common.sh@10 -- # set +x 00:23:11.833 00:57:04 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:11.833 00:57:04 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:11.833 00:57:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:11.833 00:57:04 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:11.833 00:57:04 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:11.833 [2024-04-27 00:57:04.385348] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:23:11.833 [2024-04-27 00:57:04.385418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.833 [2024-04-27 00:57:04.385433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.833 [2024-04-27 00:57:04.385447] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.833 [2024-04-27 00:57:04.385460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.833 [2024-04-27 00:57:04.385469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.833 [2024-04-27 00:57:04.385477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.833 [2024-04-27 00:57:04.385485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.833 [2024-04-27 00:57:04.385492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.833 [2024-04-27 00:57:04.385502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:11.833 [2024-04-27 00:57:04.385509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:11.833 [2024-04-27 00:57:04.385518] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:23:11.833 [2024-04-27 00:57:04.395341] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:23:11.833 [2024-04-27 00:57:04.405356] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:12.772 00:57:05 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:12.772 00:57:05 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:12.772 00:57:05 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:12.772 00:57:05 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:12.772 00:57:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:12.772 00:57:05 -- common/autotest_common.sh@10 -- # set +x 00:23:12.772 00:57:05 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:12.772 [2024-04-27 00:57:05.439251] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:14.148 [2024-04-27 00:57:06.462259] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:23:14.148 [2024-04-27 00:57:06.462338] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x614000005640 with addr=10.0.0.2, port=4420 00:23:14.148 [2024-04-27 00:57:06.462370] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x614000005640 is same with the state(5) to be set 00:23:14.148 [2024-04-27 00:57:06.463028] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005640 (9): Bad file descriptor 00:23:14.148 [2024-04-27 00:57:06.463069] bdev_nvme.c:2052:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:14.148 [2024-04-27 00:57:06.463114] bdev_nvme.c:6674:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:23:14.148 [2024-04-27 00:57:06.463160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.148 [2024-04-27 00:57:06.463184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.148 [2024-04-27 00:57:06.463206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.148 [2024-04-27 00:57:06.463240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.148 [2024-04-27 00:57:06.463261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.149 [2024-04-27 00:57:06.463275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.149 [2024-04-27 00:57:06.463290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.149 [2024-04-27 00:57:06.463305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.149 [2024-04-27 00:57:06.463321] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:23:14.149 [2024-04-27 00:57:06.463335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:14.149 [2024-04-27 00:57:06.463350] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
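Each one-second iteration in the trace above is the host script re-reading the bdev list over its private RPC socket until the expected state appears: "nvme0n1" while the path is healthy, an empty list once the interface removal has torn it down. A hedged reconstruction of that wait_for_bdev/get_bdev_list pattern, with rpc.py standing in for the harness's rpc_cmd wrapper:

    get_bdev_list() {
        # All bdev names as one sorted, space-separated line.
        ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {
        local expected="$1"
        # Poll until the list matches, e.g. "nvme0n1" after attach or ""
        # after the interface goes away.
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

The repeated ASYNC EVENT REQUEST / KEEP ALIVE aborts and the errno 110 connect() failures interleaved with the polling are the expected fallout of yanking cvl_0_0 out from under an active controller; the loop simply keeps watching until the reset path gives up and the bdev disappears.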
00:23:14.149 [2024-04-27 00:57:06.463461] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000005240 (9): Bad file descriptor 00:23:14.149 [2024-04-27 00:57:06.464450] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:23:14.149 [2024-04-27 00:57:06.464469] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:23:14.149 00:57:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:14.149 00:57:06 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:23:14.149 00:57:06 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.086 00:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.086 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:23:15.086 00:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:15.086 00:57:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:15.086 00:57:07 -- common/autotest_common.sh@10 -- # set +x 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:15.086 00:57:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:15.086 00:57:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.025 [2024-04-27 00:57:08.509065] bdev_nvme.c:6923:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:16.025 [2024-04-27 00:57:08.509093] bdev_nvme.c:7003:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:16.025 [2024-04-27 00:57:08.509117] bdev_nvme.c:6886:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:16.025 00:57:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:16.025 00:57:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:16.025 00:57:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:16.025 00:57:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:16.025 00:57:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:16.025 00:57:08 -- common/autotest_common.sh@10 -- # set +x 00:23:16.025 00:57:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:16.025 [2024-04-27 00:57:08.639203] bdev_nvme.c:6852:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:23:16.025 00:57:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:16.025 00:57:08 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:23:16.025 00:57:08 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:23:16.285 [2024-04-27 00:57:08.740320] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:16.285 [2024-04-27 00:57:08.740376] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:16.285 [2024-04-27 00:57:08.740409] bdev_nvme.c:7713:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:16.285 [2024-04-27 00:57:08.740430] bdev_nvme.c:6742:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:23:16.285 [2024-04-27 00:57:08.740442] bdev_nvme.c:6701:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:16.285 [2024-04-27 00:57:08.747163] bdev_nvme.c:1606:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x61400000a040 was disconnected and freed. delete nvme_qpair. 00:23:17.221 00:57:09 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:23:17.221 00:57:09 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:17.221 00:57:09 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:23:17.221 00:57:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:17.221 00:57:09 -- host/discovery_remove_ifc.sh@29 -- # sort 00:23:17.221 00:57:09 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:23:17.221 00:57:09 -- common/autotest_common.sh@10 -- # set +x 00:23:17.221 00:57:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:17.221 00:57:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:23:17.221 00:57:09 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:23:17.221 00:57:09 -- host/discovery_remove_ifc.sh@90 -- # killprocess 2858756 00:23:17.221 00:57:09 -- common/autotest_common.sh@936 -- # '[' -z 2858756 ']' 00:23:17.221 00:57:09 -- common/autotest_common.sh@940 -- # kill -0 2858756 00:23:17.221 00:57:09 -- common/autotest_common.sh@941 -- # uname 00:23:17.221 00:57:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.221 00:57:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2858756 00:23:17.221 00:57:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:17.221 00:57:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:17.221 00:57:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2858756' 00:23:17.221 killing process with pid 2858756 00:23:17.221 00:57:09 -- common/autotest_common.sh@955 -- # kill 2858756 00:23:17.221 00:57:09 -- common/autotest_common.sh@960 -- # wait 2858756 00:23:17.479 00:57:10 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:23:17.479 00:57:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:17.479 00:57:10 -- nvmf/common.sh@117 -- # sync 00:23:17.479 00:57:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:17.479 00:57:10 -- nvmf/common.sh@120 -- # set +e 00:23:17.479 00:57:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:17.479 00:57:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:17.479 rmmod nvme_tcp 00:23:17.479 rmmod nvme_fabrics 00:23:17.738 rmmod nvme_keyring 00:23:17.738 00:57:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:17.738 00:57:10 -- nvmf/common.sh@124 -- # set -e 00:23:17.738 00:57:10 
-- nvmf/common.sh@125 -- # return 0 00:23:17.738 00:57:10 -- nvmf/common.sh@478 -- # '[' -n 2858472 ']' 00:23:17.738 00:57:10 -- nvmf/common.sh@479 -- # killprocess 2858472 00:23:17.738 00:57:10 -- common/autotest_common.sh@936 -- # '[' -z 2858472 ']' 00:23:17.738 00:57:10 -- common/autotest_common.sh@940 -- # kill -0 2858472 00:23:17.738 00:57:10 -- common/autotest_common.sh@941 -- # uname 00:23:17.738 00:57:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:17.738 00:57:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2858472 00:23:17.738 00:57:10 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:17.738 00:57:10 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:17.738 00:57:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2858472' 00:23:17.738 killing process with pid 2858472 00:23:17.738 00:57:10 -- common/autotest_common.sh@955 -- # kill 2858472 00:23:17.738 00:57:10 -- common/autotest_common.sh@960 -- # wait 2858472 00:23:17.999 00:57:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:17.999 00:57:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:17.999 00:57:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:17.999 00:57:10 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:17.999 00:57:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:17.999 00:57:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.999 00:57:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.999 00:57:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.539 00:57:12 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:20.539 00:23:20.539 real 0m22.402s 00:23:20.539 user 0m27.822s 00:23:20.539 sys 0m5.266s 00:23:20.539 00:57:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:20.539 00:57:12 -- common/autotest_common.sh@10 -- # set +x 00:23:20.539 ************************************ 00:23:20.539 END TEST nvmf_discovery_remove_ifc 00:23:20.539 ************************************ 00:23:20.539 00:57:12 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:20.539 00:57:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:20.539 00:57:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:20.539 00:57:12 -- common/autotest_common.sh@10 -- # set +x 00:23:20.539 ************************************ 00:23:20.539 START TEST nvmf_identify_kernel_target 00:23:20.539 ************************************ 00:23:20.539 00:57:12 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:23:20.539 * Looking for test storage... 
00:23:20.539 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:20.539 00:57:12 -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:20.539 00:57:12 -- nvmf/common.sh@7 -- # uname -s 00:23:20.539 00:57:12 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:20.539 00:57:12 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:20.539 00:57:12 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:20.539 00:57:12 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:20.539 00:57:12 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:20.539 00:57:12 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:20.539 00:57:12 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:20.539 00:57:12 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:20.539 00:57:12 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:20.539 00:57:12 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:20.539 00:57:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:23:20.539 00:57:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:23:20.539 00:57:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:20.539 00:57:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:20.539 00:57:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:20.539 00:57:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:20.539 00:57:13 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:20.539 00:57:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:20.539 00:57:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:20.539 00:57:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:20.539 00:57:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.539 00:57:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.539 00:57:13 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.539 00:57:13 -- paths/export.sh@5 -- # export PATH 00:23:20.539 00:57:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:20.539 00:57:13 -- nvmf/common.sh@47 -- # : 0 00:23:20.539 00:57:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:20.539 00:57:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:20.539 00:57:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:20.540 00:57:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:20.540 00:57:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:20.540 00:57:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:20.540 00:57:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:20.540 00:57:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:20.540 00:57:13 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:23:20.540 00:57:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:20.540 00:57:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:20.540 00:57:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:20.540 00:57:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:20.540 00:57:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:20.540 00:57:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:20.540 00:57:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:20.540 00:57:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:20.540 00:57:13 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:23:20.540 00:57:13 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:20.540 00:57:13 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:20.540 00:57:13 -- common/autotest_common.sh@10 -- # set +x 00:23:25.830 00:57:18 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:25.830 00:57:18 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:25.830 00:57:18 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:25.830 00:57:18 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:25.830 00:57:18 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:25.830 00:57:18 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:25.830 00:57:18 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:25.830 00:57:18 -- nvmf/common.sh@295 -- # net_devs=() 00:23:25.830 00:57:18 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:25.830 00:57:18 -- nvmf/common.sh@296 -- # e810=() 00:23:25.830 00:57:18 -- nvmf/common.sh@296 -- # local -ga e810 00:23:25.830 00:57:18 -- nvmf/common.sh@297 
-- # x722=() 00:23:25.830 00:57:18 -- nvmf/common.sh@297 -- # local -ga x722 00:23:25.830 00:57:18 -- nvmf/common.sh@298 -- # mlx=() 00:23:25.830 00:57:18 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:25.830 00:57:18 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:25.830 00:57:18 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:25.830 00:57:18 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:25.830 00:57:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.830 00:57:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:25.830 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:25.830 00:57:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:25.830 00:57:18 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:25.830 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:25.830 00:57:18 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:25.830 00:57:18 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:25.830 00:57:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.830 00:57:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:25.830 00:57:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.830 00:57:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:25.830 Found net devices under 0000:27:00.0: cvl_0_0 00:23:25.830 00:57:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.830 00:57:18 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:23:25.830 00:57:18 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:25.830 00:57:18 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:25.830 00:57:18 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:25.830 00:57:18 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:25.830 Found net devices under 0000:27:00.1: cvl_0_1 00:23:25.830 00:57:18 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:25.830 00:57:18 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:25.830 00:57:18 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:25.830 00:57:18 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:25.830 00:57:18 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:25.830 00:57:18 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:25.830 00:57:18 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:25.830 00:57:18 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:25.830 00:57:18 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:25.830 00:57:18 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:25.830 00:57:18 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:25.830 00:57:18 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:25.830 00:57:18 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:25.830 00:57:18 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:25.830 00:57:18 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:25.830 00:57:18 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:25.830 00:57:18 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:25.830 00:57:18 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:25.830 00:57:18 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:25.830 00:57:18 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:25.830 00:57:18 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:25.830 00:57:18 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:25.830 00:57:18 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:25.830 00:57:18 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:25.830 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:25.830 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.492 ms 00:23:25.830 00:23:25.830 --- 10.0.0.2 ping statistics --- 00:23:25.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.830 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:23:25.830 00:57:18 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:25.830 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:25.830 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:23:25.830 00:23:25.830 --- 10.0.0.1 ping statistics --- 00:23:25.830 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:25.830 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:23:25.830 00:57:18 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:25.830 00:57:18 -- nvmf/common.sh@411 -- # return 0 00:23:25.830 00:57:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:25.830 00:57:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:25.830 00:57:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:25.830 00:57:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:25.830 00:57:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:25.830 00:57:18 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:23:25.830 00:57:18 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:23:25.830 00:57:18 -- nvmf/common.sh@717 -- # local ip 00:23:25.830 00:57:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:25.830 00:57:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:25.830 00:57:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.830 00:57:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.830 00:57:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:25.830 00:57:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:25.830 00:57:18 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:23:25.830 00:57:18 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:23:25.830 00:57:18 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:23:25.830 00:57:18 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:25.830 00:57:18 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:25.830 00:57:18 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:25.830 00:57:18 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:25.830 00:57:18 -- nvmf/common.sh@628 -- # local block nvme 00:23:25.830 00:57:18 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:25.830 00:57:18 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:25.830 00:57:18 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:23:29.126 Waiting for block devices as requested 00:23:29.126 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:23:29.126 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:29.126 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:29.126 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:23:29.126 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:29.126 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:23:29.126 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:29.126 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:23:29.126 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:29.126 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:23:29.384 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:29.384 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:23:29.384 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:23:29.384 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:23:29.642 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:23:29.642 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:29.642 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:23:29.642 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:29.902 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:23:29.902 00:57:22 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:29.902 00:57:22 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:29.902 00:57:22 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:29.902 00:57:22 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:29.902 00:57:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:29.902 00:57:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:29.902 00:57:22 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:29.902 00:57:22 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:29.902 00:57:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:29.902 No valid GPT data, bailing 00:23:29.902 00:57:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:29.902 00:57:22 -- scripts/common.sh@391 -- # pt= 00:23:29.902 00:57:22 -- scripts/common.sh@392 -- # return 1 00:23:29.902 00:57:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:29.902 00:57:22 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:29.902 00:57:22 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:29.902 00:57:22 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:23:29.902 00:57:22 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:23:29.902 00:57:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:29.902 00:57:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:29.902 00:57:22 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:23:29.902 00:57:22 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:29.902 00:57:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:23:30.163 No valid GPT data, bailing 00:23:30.163 00:57:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:30.163 00:57:22 -- scripts/common.sh@391 -- # pt= 00:23:30.163 00:57:22 -- scripts/common.sh@392 -- # return 1 00:23:30.163 00:57:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 
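The block-device scan traced above decides which disk the kernel target may claim: every /sys/block/nvme* entry is checked, zoned namespaces are skipped, and a failing spdk-gpt.py run ("No valid GPT data, bailing") marks the disk as carrying no partition table and therefore safe to export. A condensed sketch of that logic, simplified from the traced helpers:

    nvme=
    for block in /sys/block/nvme*; do
        dev=${block##*/}
        # Skip zoned namespaces; the kernel target wants a plain block device.
        [[ -e $block/queue/zoned && $(<"$block/queue/zoned") != none ]] && continue
        # spdk-gpt.py exits non-zero on a disk with no valid GPT, i.e. an unused one.
        if ! scripts/spdk-gpt.py "$dev" >/dev/null 2>&1; then
            nvme=/dev/$dev    # later free devices overwrite earlier ones, as in the log
        fi
    done
    echo "$nvme"              # /dev/nvme2n1 in this run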
00:23:30.163 00:57:22 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:30.163 00:57:22 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme2n1 ]] 00:23:30.163 00:57:22 -- nvmf/common.sh@641 -- # is_block_zoned nvme2n1 00:23:30.163 00:57:22 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:23:30.163 00:57:22 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:23:30.163 00:57:22 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:30.163 00:57:22 -- nvmf/common.sh@642 -- # block_in_use nvme2n1 00:23:30.163 00:57:22 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:23:30.163 00:57:22 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:23:30.163 No valid GPT data, bailing 00:23:30.163 00:57:22 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:23:30.163 00:57:22 -- scripts/common.sh@391 -- # pt= 00:23:30.163 00:57:22 -- scripts/common.sh@392 -- # return 1 00:23:30.163 00:57:22 -- nvmf/common.sh@642 -- # nvme=/dev/nvme2n1 00:23:30.163 00:57:22 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme2n1 ]] 00:23:30.163 00:57:22 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:30.163 00:57:22 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:30.163 00:57:22 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:30.163 00:57:22 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:23:30.163 00:57:22 -- nvmf/common.sh@656 -- # echo 1 00:23:30.163 00:57:22 -- nvmf/common.sh@657 -- # echo /dev/nvme2n1 00:23:30.163 00:57:22 -- nvmf/common.sh@658 -- # echo 1 00:23:30.163 00:57:22 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:30.163 00:57:22 -- nvmf/common.sh@661 -- # echo tcp 00:23:30.163 00:57:22 -- nvmf/common.sh@662 -- # echo 4420 00:23:30.163 00:57:22 -- nvmf/common.sh@663 -- # echo ipv4 00:23:30.163 00:57:22 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:30.163 00:57:22 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -a 10.0.0.1 -t tcp -s 4420 00:23:30.163 00:23:30.163 Discovery Log Number of Records 2, Generation counter 2 00:23:30.163 =====Discovery Log Entry 0====== 00:23:30.163 trtype: tcp 00:23:30.163 adrfam: ipv4 00:23:30.163 subtype: current discovery subsystem 00:23:30.163 treq: not specified, sq flow control disable supported 00:23:30.163 portid: 1 00:23:30.163 trsvcid: 4420 00:23:30.163 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:30.163 traddr: 10.0.0.1 00:23:30.163 eflags: none 00:23:30.163 sectype: none 00:23:30.163 =====Discovery Log Entry 1====== 00:23:30.163 trtype: tcp 00:23:30.163 adrfam: ipv4 00:23:30.163 subtype: nvme subsystem 00:23:30.163 treq: not specified, sq flow control disable supported 00:23:30.163 portid: 1 00:23:30.163 trsvcid: 4420 00:23:30.163 subnqn: nqn.2016-06.io.spdk:testnqn 00:23:30.163 traddr: 10.0.0.1 00:23:30.163 eflags: none 00:23:30.163 sectype: none 00:23:30.163 00:57:22 -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:23:30.163 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:23:30.163 EAL: No free 2048 kB hugepages reported on node 1 
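Between the GPT probing and the discover output above, the harness wires up the in-kernel nvmet target through configfs. xtrace does not show where each echo is redirected, so the attribute file names below are the standard nvmet configfs ones (an inference, not visible in the log); the values are verbatim from the trace:

    modprobe nvmet
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$nvmet/ports/1"
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_serial"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme2n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # expose the subsystem on the port

The nvme discover run above sees exactly two records on 10.0.0.1:4420, the well-known discovery subsystem plus nqn.2016-06.io.spdk:testnqn, confirming the wiring took; the spdk_nvme_identify output that follows interrogates the same port.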
00:23:30.163 ===================================================== 00:23:30.164 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:23:30.164 ===================================================== 00:23:30.164 Controller Capabilities/Features 00:23:30.164 ================================ 00:23:30.164 Vendor ID: 0000 00:23:30.164 Subsystem Vendor ID: 0000 00:23:30.164 Serial Number: 826c71cee7f6a54f8f37 00:23:30.164 Model Number: Linux 00:23:30.164 Firmware Version: 6.7.0-68 00:23:30.164 Recommended Arb Burst: 0 00:23:30.164 IEEE OUI Identifier: 00 00 00 00:23:30.164 Multi-path I/O 00:23:30.164 May have multiple subsystem ports: No 00:23:30.164 May have multiple controllers: No 00:23:30.164 Associated with SR-IOV VF: No 00:23:30.164 Max Data Transfer Size: Unlimited 00:23:30.164 Max Number of Namespaces: 0 00:23:30.164 Max Number of I/O Queues: 1024 00:23:30.164 NVMe Specification Version (VS): 1.3 00:23:30.164 NVMe Specification Version (Identify): 1.3 00:23:30.164 Maximum Queue Entries: 1024 00:23:30.164 Contiguous Queues Required: No 00:23:30.164 Arbitration Mechanisms Supported 00:23:30.164 Weighted Round Robin: Not Supported 00:23:30.164 Vendor Specific: Not Supported 00:23:30.164 Reset Timeout: 7500 ms 00:23:30.164 Doorbell Stride: 4 bytes 00:23:30.164 NVM Subsystem Reset: Not Supported 00:23:30.164 Command Sets Supported 00:23:30.164 NVM Command Set: Supported 00:23:30.164 Boot Partition: Not Supported 00:23:30.164 Memory Page Size Minimum: 4096 bytes 00:23:30.164 Memory Page Size Maximum: 4096 bytes 00:23:30.164 Persistent Memory Region: Not Supported 00:23:30.164 Optional Asynchronous Events Supported 00:23:30.164 Namespace Attribute Notices: Not Supported 00:23:30.164 Firmware Activation Notices: Not Supported 00:23:30.164 ANA Change Notices: Not Supported 00:23:30.164 PLE Aggregate Log Change Notices: Not Supported 00:23:30.164 LBA Status Info Alert Notices: Not Supported 00:23:30.164 EGE Aggregate Log Change Notices: Not Supported 00:23:30.164 Normal NVM Subsystem Shutdown event: Not Supported 00:23:30.164 Zone Descriptor Change Notices: Not Supported 00:23:30.164 Discovery Log Change Notices: Supported 00:23:30.164 Controller Attributes 00:23:30.164 128-bit Host Identifier: Not Supported 00:23:30.164 Non-Operational Permissive Mode: Not Supported 00:23:30.164 NVM Sets: Not Supported 00:23:30.164 Read Recovery Levels: Not Supported 00:23:30.164 Endurance Groups: Not Supported 00:23:30.164 Predictable Latency Mode: Not Supported 00:23:30.164 Traffic Based Keep ALive: Not Supported 00:23:30.164 Namespace Granularity: Not Supported 00:23:30.164 SQ Associations: Not Supported 00:23:30.164 UUID List: Not Supported 00:23:30.164 Multi-Domain Subsystem: Not Supported 00:23:30.164 Fixed Capacity Management: Not Supported 00:23:30.164 Variable Capacity Management: Not Supported 00:23:30.164 Delete Endurance Group: Not Supported 00:23:30.164 Delete NVM Set: Not Supported 00:23:30.164 Extended LBA Formats Supported: Not Supported 00:23:30.164 Flexible Data Placement Supported: Not Supported 00:23:30.164 00:23:30.164 Controller Memory Buffer Support 00:23:30.164 ================================ 00:23:30.164 Supported: No 00:23:30.164 00:23:30.164 Persistent Memory Region Support 00:23:30.164 ================================ 00:23:30.164 Supported: No 00:23:30.164 00:23:30.164 Admin Command Set Attributes 00:23:30.164 ============================ 00:23:30.164 Security Send/Receive: Not Supported 00:23:30.164 Format NVM: Not Supported 00:23:30.164 Firmware 
Activate/Download: Not Supported 00:23:30.164 Namespace Management: Not Supported 00:23:30.164 Device Self-Test: Not Supported 00:23:30.164 Directives: Not Supported 00:23:30.164 NVMe-MI: Not Supported 00:23:30.164 Virtualization Management: Not Supported 00:23:30.164 Doorbell Buffer Config: Not Supported 00:23:30.164 Get LBA Status Capability: Not Supported 00:23:30.164 Command & Feature Lockdown Capability: Not Supported 00:23:30.164 Abort Command Limit: 1 00:23:30.164 Async Event Request Limit: 1 00:23:30.164 Number of Firmware Slots: N/A 00:23:30.164 Firmware Slot 1 Read-Only: N/A 00:23:30.164 Firmware Activation Without Reset: N/A 00:23:30.164 Multiple Update Detection Support: N/A 00:23:30.164 Firmware Update Granularity: No Information Provided 00:23:30.164 Per-Namespace SMART Log: No 00:23:30.164 Asymmetric Namespace Access Log Page: Not Supported 00:23:30.164 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:23:30.164 Command Effects Log Page: Not Supported 00:23:30.164 Get Log Page Extended Data: Supported 00:23:30.164 Telemetry Log Pages: Not Supported 00:23:30.164 Persistent Event Log Pages: Not Supported 00:23:30.164 Supported Log Pages Log Page: May Support 00:23:30.164 Commands Supported & Effects Log Page: Not Supported 00:23:30.164 Feature Identifiers & Effects Log Page:May Support 00:23:30.164 NVMe-MI Commands & Effects Log Page: May Support 00:23:30.164 Data Area 4 for Telemetry Log: Not Supported 00:23:30.164 Error Log Page Entries Supported: 1 00:23:30.164 Keep Alive: Not Supported 00:23:30.164 00:23:30.164 NVM Command Set Attributes 00:23:30.164 ========================== 00:23:30.164 Submission Queue Entry Size 00:23:30.164 Max: 1 00:23:30.164 Min: 1 00:23:30.164 Completion Queue Entry Size 00:23:30.164 Max: 1 00:23:30.164 Min: 1 00:23:30.164 Number of Namespaces: 0 00:23:30.164 Compare Command: Not Supported 00:23:30.164 Write Uncorrectable Command: Not Supported 00:23:30.164 Dataset Management Command: Not Supported 00:23:30.164 Write Zeroes Command: Not Supported 00:23:30.164 Set Features Save Field: Not Supported 00:23:30.164 Reservations: Not Supported 00:23:30.164 Timestamp: Not Supported 00:23:30.164 Copy: Not Supported 00:23:30.164 Volatile Write Cache: Not Present 00:23:30.164 Atomic Write Unit (Normal): 1 00:23:30.164 Atomic Write Unit (PFail): 1 00:23:30.164 Atomic Compare & Write Unit: 1 00:23:30.164 Fused Compare & Write: Not Supported 00:23:30.164 Scatter-Gather List 00:23:30.164 SGL Command Set: Supported 00:23:30.164 SGL Keyed: Not Supported 00:23:30.164 SGL Bit Bucket Descriptor: Not Supported 00:23:30.164 SGL Metadata Pointer: Not Supported 00:23:30.164 Oversized SGL: Not Supported 00:23:30.164 SGL Metadata Address: Not Supported 00:23:30.164 SGL Offset: Supported 00:23:30.164 Transport SGL Data Block: Not Supported 00:23:30.164 Replay Protected Memory Block: Not Supported 00:23:30.164 00:23:30.164 Firmware Slot Information 00:23:30.164 ========================= 00:23:30.164 Active slot: 0 00:23:30.164 00:23:30.164 00:23:30.164 Error Log 00:23:30.164 ========= 00:23:30.164 00:23:30.164 Active Namespaces 00:23:30.164 ================= 00:23:30.164 Discovery Log Page 00:23:30.164 ================== 00:23:30.164 Generation Counter: 2 00:23:30.164 Number of Records: 2 00:23:30.164 Record Format: 0 00:23:30.164 00:23:30.164 Discovery Log Entry 0 00:23:30.164 ---------------------- 00:23:30.164 Transport Type: 3 (TCP) 00:23:30.164 Address Family: 1 (IPv4) 00:23:30.164 Subsystem Type: 3 (Current Discovery Subsystem) 00:23:30.164 Entry Flags: 
00:23:30.164 Duplicate Returned Information: 0 00:23:30.164 Explicit Persistent Connection Support for Discovery: 0 00:23:30.164 Transport Requirements: 00:23:30.164 Secure Channel: Not Specified 00:23:30.164 Port ID: 1 (0x0001) 00:23:30.164 Controller ID: 65535 (0xffff) 00:23:30.164 Admin Max SQ Size: 32 00:23:30.164 Transport Service Identifier: 4420 00:23:30.164 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:23:30.164 Transport Address: 10.0.0.1 00:23:30.164 Discovery Log Entry 1 00:23:30.164 ---------------------- 00:23:30.164 Transport Type: 3 (TCP) 00:23:30.164 Address Family: 1 (IPv4) 00:23:30.164 Subsystem Type: 2 (NVM Subsystem) 00:23:30.164 Entry Flags: 00:23:30.164 Duplicate Returned Information: 0 00:23:30.164 Explicit Persistent Connection Support for Discovery: 0 00:23:30.164 Transport Requirements: 00:23:30.164 Secure Channel: Not Specified 00:23:30.164 Port ID: 1 (0x0001) 00:23:30.164 Controller ID: 65535 (0xffff) 00:23:30.164 Admin Max SQ Size: 32 00:23:30.164 Transport Service Identifier: 4420 00:23:30.164 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:23:30.164 Transport Address: 10.0.0.1 00:23:30.164 00:57:22 -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:23:30.428 EAL: No free 2048 kB hugepages reported on node 1 00:23:30.428 get_feature(0x01) failed 00:23:30.428 get_feature(0x02) failed 00:23:30.428 get_feature(0x04) failed 00:23:30.428 ===================================================== 00:23:30.428 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:23:30.428 ===================================================== 00:23:30.428 Controller Capabilities/Features 00:23:30.428 ================================ 00:23:30.428 Vendor ID: 0000 00:23:30.428 Subsystem Vendor ID: 0000 00:23:30.428 Serial Number: c645c5fbbcd57a2f08ff 00:23:30.428 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:23:30.428 Firmware Version: 6.7.0-68 00:23:30.428 Recommended Arb Burst: 6 00:23:30.428 IEEE OUI Identifier: 00 00 00 00:23:30.429 Multi-path I/O 00:23:30.429 May have multiple subsystem ports: Yes 00:23:30.429 May have multiple controllers: Yes 00:23:30.429 Associated with SR-IOV VF: No 00:23:30.429 Max Data Transfer Size: Unlimited 00:23:30.429 Max Number of Namespaces: 1024 00:23:30.429 Max Number of I/O Queues: 128 00:23:30.429 NVMe Specification Version (VS): 1.3 00:23:30.429 NVMe Specification Version (Identify): 1.3 00:23:30.429 Maximum Queue Entries: 1024 00:23:30.429 Contiguous Queues Required: No 00:23:30.429 Arbitration Mechanisms Supported 00:23:30.429 Weighted Round Robin: Not Supported 00:23:30.429 Vendor Specific: Not Supported 00:23:30.429 Reset Timeout: 7500 ms 00:23:30.429 Doorbell Stride: 4 bytes 00:23:30.429 NVM Subsystem Reset: Not Supported 00:23:30.429 Command Sets Supported 00:23:30.429 NVM Command Set: Supported 00:23:30.429 Boot Partition: Not Supported 00:23:30.429 Memory Page Size Minimum: 4096 bytes 00:23:30.429 Memory Page Size Maximum: 4096 bytes 00:23:30.429 Persistent Memory Region: Not Supported 00:23:30.429 Optional Asynchronous Events Supported 00:23:30.429 Namespace Attribute Notices: Supported 00:23:30.429 Firmware Activation Notices: Not Supported 00:23:30.429 ANA Change Notices: Supported 00:23:30.429 PLE Aggregate Log Change Notices: Not Supported 00:23:30.429 LBA Status Info Alert Notices: Not Supported 00:23:30.429 EGE 
Aggregate Log Change Notices: Not Supported 00:23:30.429 Normal NVM Subsystem Shutdown event: Not Supported 00:23:30.429 Zone Descriptor Change Notices: Not Supported 00:23:30.429 Discovery Log Change Notices: Not Supported 00:23:30.429 Controller Attributes 00:23:30.429 128-bit Host Identifier: Supported 00:23:30.429 Non-Operational Permissive Mode: Not Supported 00:23:30.429 NVM Sets: Not Supported 00:23:30.429 Read Recovery Levels: Not Supported 00:23:30.429 Endurance Groups: Not Supported 00:23:30.429 Predictable Latency Mode: Not Supported 00:23:30.429 Traffic Based Keep ALive: Supported 00:23:30.429 Namespace Granularity: Not Supported 00:23:30.429 SQ Associations: Not Supported 00:23:30.429 UUID List: Not Supported 00:23:30.429 Multi-Domain Subsystem: Not Supported 00:23:30.429 Fixed Capacity Management: Not Supported 00:23:30.429 Variable Capacity Management: Not Supported 00:23:30.429 Delete Endurance Group: Not Supported 00:23:30.429 Delete NVM Set: Not Supported 00:23:30.429 Extended LBA Formats Supported: Not Supported 00:23:30.429 Flexible Data Placement Supported: Not Supported 00:23:30.429 00:23:30.429 Controller Memory Buffer Support 00:23:30.429 ================================ 00:23:30.429 Supported: No 00:23:30.429 00:23:30.429 Persistent Memory Region Support 00:23:30.429 ================================ 00:23:30.429 Supported: No 00:23:30.429 00:23:30.429 Admin Command Set Attributes 00:23:30.429 ============================ 00:23:30.429 Security Send/Receive: Not Supported 00:23:30.429 Format NVM: Not Supported 00:23:30.429 Firmware Activate/Download: Not Supported 00:23:30.429 Namespace Management: Not Supported 00:23:30.429 Device Self-Test: Not Supported 00:23:30.429 Directives: Not Supported 00:23:30.429 NVMe-MI: Not Supported 00:23:30.429 Virtualization Management: Not Supported 00:23:30.429 Doorbell Buffer Config: Not Supported 00:23:30.429 Get LBA Status Capability: Not Supported 00:23:30.429 Command & Feature Lockdown Capability: Not Supported 00:23:30.429 Abort Command Limit: 4 00:23:30.429 Async Event Request Limit: 4 00:23:30.429 Number of Firmware Slots: N/A 00:23:30.429 Firmware Slot 1 Read-Only: N/A 00:23:30.429 Firmware Activation Without Reset: N/A 00:23:30.429 Multiple Update Detection Support: N/A 00:23:30.429 Firmware Update Granularity: No Information Provided 00:23:30.429 Per-Namespace SMART Log: Yes 00:23:30.429 Asymmetric Namespace Access Log Page: Supported 00:23:30.429 ANA Transition Time : 10 sec 00:23:30.429 00:23:30.429 Asymmetric Namespace Access Capabilities 00:23:30.429 ANA Optimized State : Supported 00:23:30.429 ANA Non-Optimized State : Supported 00:23:30.429 ANA Inaccessible State : Supported 00:23:30.429 ANA Persistent Loss State : Supported 00:23:30.429 ANA Change State : Supported 00:23:30.429 ANAGRPID is not changed : No 00:23:30.429 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:23:30.429 00:23:30.429 ANA Group Identifier Maximum : 128 00:23:30.429 Number of ANA Group Identifiers : 128 00:23:30.429 Max Number of Allowed Namespaces : 1024 00:23:30.429 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:23:30.429 Command Effects Log Page: Supported 00:23:30.429 Get Log Page Extended Data: Supported 00:23:30.429 Telemetry Log Pages: Not Supported 00:23:30.429 Persistent Event Log Pages: Not Supported 00:23:30.429 Supported Log Pages Log Page: May Support 00:23:30.429 Commands Supported & Effects Log Page: Not Supported 00:23:30.429 Feature Identifiers & Effects Log Page:May Support 00:23:30.429 NVMe-MI Commands & Effects Log Page: 
May Support 00:23:30.429 Data Area 4 for Telemetry Log: Not Supported 00:23:30.429 Error Log Page Entries Supported: 128 00:23:30.429 Keep Alive: Supported 00:23:30.429 Keep Alive Granularity: 1000 ms 00:23:30.429 00:23:30.429 NVM Command Set Attributes 00:23:30.429 ========================== 00:23:30.429 Submission Queue Entry Size 00:23:30.429 Max: 64 00:23:30.430 Min: 64 00:23:30.430 Completion Queue Entry Size 00:23:30.430 Max: 16 00:23:30.430 Min: 16 00:23:30.430 Number of Namespaces: 1024 00:23:30.430 Compare Command: Not Supported 00:23:30.430 Write Uncorrectable Command: Not Supported 00:23:30.430 Dataset Management Command: Supported 00:23:30.430 Write Zeroes Command: Supported 00:23:30.430 Set Features Save Field: Not Supported 00:23:30.430 Reservations: Not Supported 00:23:30.430 Timestamp: Not Supported 00:23:30.430 Copy: Not Supported 00:23:30.430 Volatile Write Cache: Present 00:23:30.430 Atomic Write Unit (Normal): 1 00:23:30.430 Atomic Write Unit (PFail): 1 00:23:30.430 Atomic Compare & Write Unit: 1 00:23:30.430 Fused Compare & Write: Not Supported 00:23:30.430 Scatter-Gather List 00:23:30.430 SGL Command Set: Supported 00:23:30.430 SGL Keyed: Not Supported 00:23:30.430 SGL Bit Bucket Descriptor: Not Supported 00:23:30.430 SGL Metadata Pointer: Not Supported 00:23:30.430 Oversized SGL: Not Supported 00:23:30.430 SGL Metadata Address: Not Supported 00:23:30.430 SGL Offset: Supported 00:23:30.430 Transport SGL Data Block: Not Supported 00:23:30.430 Replay Protected Memory Block: Not Supported 00:23:30.430 00:23:30.430 Firmware Slot Information 00:23:30.430 ========================= 00:23:30.430 Active slot: 0 00:23:30.430 00:23:30.430 Asymmetric Namespace Access 00:23:30.430 =========================== 00:23:30.430 Change Count : 0 00:23:30.430 Number of ANA Group Descriptors : 1 00:23:30.430 ANA Group Descriptor : 0 00:23:30.430 ANA Group ID : 1 00:23:30.430 Number of NSID Values : 1 00:23:30.430 Change Count : 0 00:23:30.430 ANA State : 1 00:23:30.430 Namespace Identifier : 1 00:23:30.430 00:23:30.430 Commands Supported and Effects 00:23:30.430 ============================== 00:23:30.430 Admin Commands 00:23:30.430 -------------- 00:23:30.430 Get Log Page (02h): Supported 00:23:30.430 Identify (06h): Supported 00:23:30.430 Abort (08h): Supported 00:23:30.430 Set Features (09h): Supported 00:23:30.430 Get Features (0Ah): Supported 00:23:30.430 Asynchronous Event Request (0Ch): Supported 00:23:30.430 Keep Alive (18h): Supported 00:23:30.430 I/O Commands 00:23:30.430 ------------ 00:23:30.430 Flush (00h): Supported 00:23:30.430 Write (01h): Supported LBA-Change 00:23:30.430 Read (02h): Supported 00:23:30.430 Write Zeroes (08h): Supported LBA-Change 00:23:30.430 Dataset Management (09h): Supported 00:23:30.430 00:23:30.430 Error Log 00:23:30.430 ========= 00:23:30.430 Entry: 0 00:23:30.430 Error Count: 0x3 00:23:30.430 Submission Queue Id: 0x0 00:23:30.430 Command Id: 0x5 00:23:30.430 Phase Bit: 0 00:23:30.430 Status Code: 0x2 00:23:30.430 Status Code Type: 0x0 00:23:30.430 Do Not Retry: 1 00:23:30.430 Error Location: 0x28 00:23:30.430 LBA: 0x0 00:23:30.430 Namespace: 0x0 00:23:30.430 Vendor Log Page: 0x0 00:23:30.430 ----------- 00:23:30.430 Entry: 1 00:23:30.430 Error Count: 0x2 00:23:30.430 Submission Queue Id: 0x0 00:23:30.430 Command Id: 0x5 00:23:30.430 Phase Bit: 0 00:23:30.430 Status Code: 0x2 00:23:30.430 Status Code Type: 0x0 00:23:30.430 Do Not Retry: 1 00:23:30.430 Error Location: 0x28 00:23:30.430 LBA: 0x0 00:23:30.430 Namespace: 0x0 00:23:30.430 Vendor Log Page: 
0x0 00:23:30.430 ----------- 00:23:30.430 Entry: 2 00:23:30.430 Error Count: 0x1 00:23:30.430 Submission Queue Id: 0x0 00:23:30.430 Command Id: 0x4 00:23:30.430 Phase Bit: 0 00:23:30.430 Status Code: 0x2 00:23:30.430 Status Code Type: 0x0 00:23:30.430 Do Not Retry: 1 00:23:30.430 Error Location: 0x28 00:23:30.430 LBA: 0x0 00:23:30.430 Namespace: 0x0 00:23:30.430 Vendor Log Page: 0x0 00:23:30.430 00:23:30.430 Number of Queues 00:23:30.430 ================ 00:23:30.430 Number of I/O Submission Queues: 128 00:23:30.430 Number of I/O Completion Queues: 128 00:23:30.430 00:23:30.430 ZNS Specific Controller Data 00:23:30.430 ============================ 00:23:30.430 Zone Append Size Limit: 0 00:23:30.430 00:23:30.430 00:23:30.430 Active Namespaces 00:23:30.430 ================= 00:23:30.430 get_feature(0x05) failed 00:23:30.430 Namespace ID:1 00:23:30.431 Command Set Identifier: NVM (00h) 00:23:30.431 Deallocate: Supported 00:23:30.431 Deallocated/Unwritten Error: Not Supported 00:23:30.431 Deallocated Read Value: Unknown 00:23:30.431 Deallocate in Write Zeroes: Not Supported 00:23:30.431 Deallocated Guard Field: 0xFFFF 00:23:30.431 Flush: Supported 00:23:30.431 Reservation: Not Supported 00:23:30.431 Namespace Sharing Capabilities: Multiple Controllers 00:23:30.431 Size (in LBAs): 3907029168 (1863GiB) 00:23:30.431 Capacity (in LBAs): 3907029168 (1863GiB) 00:23:30.431 Utilization (in LBAs): 3907029168 (1863GiB) 00:23:30.431 UUID: 3d4613be-6527-4603-9e1d-d38db6a50c78 00:23:30.431 Thin Provisioning: Not Supported 00:23:30.431 Per-NS Atomic Units: Yes 00:23:30.431 Atomic Boundary Size (Normal): 0 00:23:30.431 Atomic Boundary Size (PFail): 0 00:23:30.431 Atomic Boundary Offset: 0 00:23:30.431 NGUID/EUI64 Never Reused: No 00:23:30.431 ANA group ID: 1 00:23:30.431 Namespace Write Protected: No 00:23:30.431 Number of LBA Formats: 1 00:23:30.431 Current LBA Format: LBA Format #00 00:23:30.431 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:30.431 00:23:30.431 00:57:22 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:23:30.431 00:57:22 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:30.431 00:57:22 -- nvmf/common.sh@117 -- # sync 00:23:30.431 00:57:22 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.431 00:57:22 -- nvmf/common.sh@120 -- # set +e 00:23:30.431 00:57:22 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.431 00:57:22 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.431 rmmod nvme_tcp 00:23:30.431 rmmod nvme_fabrics 00:23:30.431 00:57:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.431 00:57:23 -- nvmf/common.sh@124 -- # set -e 00:23:30.431 00:57:23 -- nvmf/common.sh@125 -- # return 0 00:23:30.431 00:57:23 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:23:30.431 00:57:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:30.431 00:57:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:30.431 00:57:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:30.431 00:57:23 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:30.431 00:57:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:30.431 00:57:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.431 00:57:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:30.431 00:57:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.414 00:57:25 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:32.414 00:57:25 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:32.414 00:57:25 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:32.414 00:57:25 -- nvmf/common.sh@675 -- # echo 0 00:23:32.673 00:57:25 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:32.673 00:57:25 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:32.673 00:57:25 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:32.673 00:57:25 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:32.673 00:57:25 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:23:32.673 00:57:25 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:23:32.673 00:57:25 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:23:35.211 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:23:35.211 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:23:35.211 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:23:35.211 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:23:35.211 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:23:35.211 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:23:35.211 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:23:35.470 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:23:35.470 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:23:35.470 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:23:35.470 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:23:35.470 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:23:35.470 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:23:35.470 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:23:35.470 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:23:35.470 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:23:37.375 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:23:37.375 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:23:37.375 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:23:37.633 00:23:37.633 real 0m17.291s 00:23:37.633 user 0m3.706s 00:23:37.633 sys 0m8.063s 00:23:37.633 00:57:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:37.633 00:57:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.633 ************************************ 00:23:37.633 END TEST nvmf_identify_kernel_target 00:23:37.633 ************************************ 00:23:37.633 00:57:30 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:37.633 00:57:30 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:37.633 00:57:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:37.633 00:57:30 -- common/autotest_common.sh@10 -- # set +x 00:23:37.633 ************************************ 00:23:37.633 START TEST nvmf_auth 00:23:37.633 ************************************ 00:23:37.633 00:57:30 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:37.892 * Looking for test storage... 
00:23:37.892 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:23:37.892 00:57:30 -- host/auth.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:23:37.892 00:57:30 -- nvmf/common.sh@7 -- # uname -s 00:23:37.892 00:57:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:37.892 00:57:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:37.892 00:57:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:37.892 00:57:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:37.892 00:57:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:37.892 00:57:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:37.892 00:57:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:37.892 00:57:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:37.892 00:57:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:37.892 00:57:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:37.892 00:57:30 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:23:37.892 00:57:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:23:37.892 00:57:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:37.892 00:57:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:37.892 00:57:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:23:37.892 00:57:30 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:37.892 00:57:30 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:23:37.892 00:57:30 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:37.892 00:57:30 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:37.892 00:57:30 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:37.892 00:57:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.892 00:57:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.892 00:57:30 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.892 00:57:30 -- paths/export.sh@5 -- # export PATH 00:23:37.892 00:57:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:37.892 00:57:30 -- nvmf/common.sh@47 -- # : 0 00:23:37.892 00:57:30 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:37.892 00:57:30 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:37.892 00:57:30 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:37.892 00:57:30 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:37.892 00:57:30 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:37.892 00:57:30 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:37.892 00:57:30 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:37.892 00:57:30 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:37.892 00:57:30 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:37.892 00:57:30 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:37.892 00:57:30 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:37.892 00:57:30 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:23:37.892 00:57:30 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:37.893 00:57:30 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:37.893 00:57:30 -- host/auth.sh@21 -- # keys=() 00:23:37.893 00:57:30 -- host/auth.sh@77 -- # nvmftestinit 00:23:37.893 00:57:30 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:37.893 00:57:30 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:37.893 00:57:30 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:37.893 00:57:30 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:37.893 00:57:30 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:37.893 00:57:30 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:37.893 00:57:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:37.893 00:57:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:37.893 00:57:30 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:23:37.893 00:57:30 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:23:37.893 00:57:30 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:37.893 00:57:30 -- common/autotest_common.sh@10 -- # set +x 00:23:43.167 00:57:35 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:23:43.167 00:57:35 -- nvmf/common.sh@291 -- # 
pci_devs=() 00:23:43.167 00:57:35 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:43.167 00:57:35 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:43.167 00:57:35 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:43.167 00:57:35 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:43.167 00:57:35 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:43.167 00:57:35 -- nvmf/common.sh@295 -- # net_devs=() 00:23:43.167 00:57:35 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:43.167 00:57:35 -- nvmf/common.sh@296 -- # e810=() 00:23:43.167 00:57:35 -- nvmf/common.sh@296 -- # local -ga e810 00:23:43.167 00:57:35 -- nvmf/common.sh@297 -- # x722=() 00:23:43.167 00:57:35 -- nvmf/common.sh@297 -- # local -ga x722 00:23:43.167 00:57:35 -- nvmf/common.sh@298 -- # mlx=() 00:23:43.167 00:57:35 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:43.167 00:57:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:43.167 00:57:35 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:43.167 00:57:35 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:43.167 00:57:35 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:23:43.167 00:57:35 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:23:43.167 00:57:35 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:23:43.167 00:57:35 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:43.167 00:57:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.167 00:57:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:23:43.167 Found 0000:27:00.0 (0x8086 - 0x159b) 00:23:43.167 00:57:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.167 00:57:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.167 00:57:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.167 00:57:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.167 00:57:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:43.168 00:57:35 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:23:43.168 Found 0000:27:00.1 (0x8086 - 0x159b) 00:23:43.168 00:57:35 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:43.168 00:57:35 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 
00:23:43.168 00:57:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.168 00:57:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.168 00:57:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:43.168 00:57:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.168 00:57:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:23:43.168 Found net devices under 0000:27:00.0: cvl_0_0 00:23:43.168 00:57:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.168 00:57:35 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:43.168 00:57:35 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:43.168 00:57:35 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:23:43.168 00:57:35 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:43.168 00:57:35 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:23:43.168 Found net devices under 0000:27:00.1: cvl_0_1 00:23:43.168 00:57:35 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:23:43.168 00:57:35 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:23:43.168 00:57:35 -- nvmf/common.sh@403 -- # is_hw=yes 00:23:43.168 00:57:35 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:23:43.168 00:57:35 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:43.168 00:57:35 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:43.168 00:57:35 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:43.168 00:57:35 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:43.168 00:57:35 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:43.168 00:57:35 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:43.168 00:57:35 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:43.168 00:57:35 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:43.168 00:57:35 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:43.168 00:57:35 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:43.168 00:57:35 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:43.168 00:57:35 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:43.168 00:57:35 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:43.168 00:57:35 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:43.168 00:57:35 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:43.168 00:57:35 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:43.168 00:57:35 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:43.168 00:57:35 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:43.168 00:57:35 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:43.168 00:57:35 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:43.168 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:43.168 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:23:43.168 00:23:43.168 --- 10.0.0.2 ping statistics --- 00:23:43.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.168 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:23:43.168 00:57:35 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:43.168 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:43.168 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.310 ms 00:23:43.168 00:23:43.168 --- 10.0.0.1 ping statistics --- 00:23:43.168 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:43.168 rtt min/avg/max/mdev = 0.310/0.310/0.310/0.000 ms 00:23:43.168 00:57:35 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:43.168 00:57:35 -- nvmf/common.sh@411 -- # return 0 00:23:43.168 00:57:35 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:43.168 00:57:35 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:43.168 00:57:35 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:43.168 00:57:35 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:43.168 00:57:35 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:43.168 00:57:35 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:43.168 00:57:35 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:23:43.168 00:57:35 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:43.168 00:57:35 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:43.168 00:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:43.168 00:57:35 -- nvmf/common.sh@470 -- # nvmfpid=2873413 00:23:43.168 00:57:35 -- nvmf/common.sh@471 -- # waitforlisten 2873413 00:23:43.168 00:57:35 -- common/autotest_common.sh@817 -- # '[' -z 2873413 ']' 00:23:43.168 00:57:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:43.168 00:57:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:43.168 00:57:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
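(For reference, the TCP loopback topology that nvmf_tcp_init assembles in the trace above, consolidated into one runnable sketch; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.1/10.0.0.2 addresses are taken from the trace itself:)

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1     # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                             # namespace that will own the target port
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # move the target interface into it
ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                       # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # and the reverse direction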
00:23:43.168 00:57:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:43.168 00:57:35 -- common/autotest_common.sh@10 -- # set +x 00:23:43.168 00:57:35 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:44.104 00:57:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:44.104 00:57:36 -- common/autotest_common.sh@850 -- # return 0 00:23:44.104 00:57:36 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:44.104 00:57:36 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:44.104 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:23:44.104 00:57:36 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:44.104 00:57:36 -- host/auth.sh@79 -- # trap 'cat /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:44.104 00:57:36 -- host/auth.sh@81 -- # gen_key null 32 00:23:44.104 00:57:36 -- host/auth.sh@53 -- # local digest len file key 00:23:44.104 00:57:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:44.104 00:57:36 -- host/auth.sh@54 -- # local -A digests 00:23:44.104 00:57:36 -- host/auth.sh@56 -- # digest=null 00:23:44.104 00:57:36 -- host/auth.sh@56 -- # len=32 00:23:44.104 00:57:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:44.104 00:57:36 -- host/auth.sh@57 -- # key=7e7fdaea9dabe6a93a3015266b83fa25 00:23:44.104 00:57:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:23:44.104 00:57:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.TU3 00:23:44.104 00:57:36 -- host/auth.sh@59 -- # format_dhchap_key 7e7fdaea9dabe6a93a3015266b83fa25 0 00:23:44.104 00:57:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 7e7fdaea9dabe6a93a3015266b83fa25 0 00:23:44.104 00:57:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # key=7e7fdaea9dabe6a93a3015266b83fa25 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # digest=0 00:23:44.104 00:57:36 -- nvmf/common.sh@694 -- # python - 00:23:44.104 00:57:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.TU3 00:23:44.104 00:57:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.TU3 00:23:44.104 00:57:36 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.TU3 00:23:44.104 00:57:36 -- host/auth.sh@82 -- # gen_key null 48 00:23:44.104 00:57:36 -- host/auth.sh@53 -- # local digest len file key 00:23:44.104 00:57:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:44.104 00:57:36 -- host/auth.sh@54 -- # local -A digests 00:23:44.104 00:57:36 -- host/auth.sh@56 -- # digest=null 00:23:44.104 00:57:36 -- host/auth.sh@56 -- # len=48 00:23:44.104 00:57:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:44.104 00:57:36 -- host/auth.sh@57 -- # key=e2d4b25099b0af15fa1d5891e722465aaac0cb1060a6a672 00:23:44.104 00:57:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:23:44.104 00:57:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.cOz 00:23:44.104 00:57:36 -- host/auth.sh@59 -- # format_dhchap_key e2d4b25099b0af15fa1d5891e722465aaac0cb1060a6a672 0 00:23:44.104 00:57:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 e2d4b25099b0af15fa1d5891e722465aaac0cb1060a6a672 0 00:23:44.104 00:57:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # 
prefix=DHHC-1 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # key=e2d4b25099b0af15fa1d5891e722465aaac0cb1060a6a672 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # digest=0 00:23:44.104 00:57:36 -- nvmf/common.sh@694 -- # python - 00:23:44.104 00:57:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.cOz 00:23:44.104 00:57:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.cOz 00:23:44.104 00:57:36 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.cOz 00:23:44.104 00:57:36 -- host/auth.sh@83 -- # gen_key sha256 32 00:23:44.104 00:57:36 -- host/auth.sh@53 -- # local digest len file key 00:23:44.104 00:57:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:44.104 00:57:36 -- host/auth.sh@54 -- # local -A digests 00:23:44.104 00:57:36 -- host/auth.sh@56 -- # digest=sha256 00:23:44.104 00:57:36 -- host/auth.sh@56 -- # len=32 00:23:44.104 00:57:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:44.104 00:57:36 -- host/auth.sh@57 -- # key=cf267f1eda2e0e4de8ded901568552c4 00:23:44.104 00:57:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:23:44.104 00:57:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.98a 00:23:44.104 00:57:36 -- host/auth.sh@59 -- # format_dhchap_key cf267f1eda2e0e4de8ded901568552c4 1 00:23:44.104 00:57:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 cf267f1eda2e0e4de8ded901568552c4 1 00:23:44.104 00:57:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # key=cf267f1eda2e0e4de8ded901568552c4 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # digest=1 00:23:44.104 00:57:36 -- nvmf/common.sh@694 -- # python - 00:23:44.104 00:57:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.98a 00:23:44.104 00:57:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.98a 00:23:44.104 00:57:36 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.98a 00:23:44.104 00:57:36 -- host/auth.sh@84 -- # gen_key sha384 48 00:23:44.104 00:57:36 -- host/auth.sh@53 -- # local digest len file key 00:23:44.104 00:57:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:44.104 00:57:36 -- host/auth.sh@54 -- # local -A digests 00:23:44.104 00:57:36 -- host/auth.sh@56 -- # digest=sha384 00:23:44.104 00:57:36 -- host/auth.sh@56 -- # len=48 00:23:44.104 00:57:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:44.104 00:57:36 -- host/auth.sh@57 -- # key=1abc1ab67ffdfeb2ecf1ccc5d91027847ccd398f0a19fa0f 00:23:44.104 00:57:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:23:44.104 00:57:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.gEk 00:23:44.104 00:57:36 -- host/auth.sh@59 -- # format_dhchap_key 1abc1ab67ffdfeb2ecf1ccc5d91027847ccd398f0a19fa0f 2 00:23:44.104 00:57:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 1abc1ab67ffdfeb2ecf1ccc5d91027847ccd398f0a19fa0f 2 00:23:44.104 00:57:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # key=1abc1ab67ffdfeb2ecf1ccc5d91027847ccd398f0a19fa0f 00:23:44.104 00:57:36 -- nvmf/common.sh@693 -- # digest=2 00:23:44.104 00:57:36 -- nvmf/common.sh@694 -- # python - 00:23:44.365 00:57:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.gEk 00:23:44.365 00:57:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.gEk 00:23:44.365 00:57:36 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.gEk 
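(The gen_key calls above draw the raw secret with xxd from /dev/urandom and pipe it through an inline python to build the DHHC-1 string; the wrapping itself is not visible in the trace, so the sketch below is a hedged reconstruction inferred from the resulting DHHC-1:<digest>:<base64>: keys, and the 4-byte little-endian CRC32 trailer in particular is an assumption, not read from the script:)

key=$(xxd -p -c0 -l 24 /dev/urandom)     # 24 random bytes -> 48 hex chars, as in 'gen_key null 48'
digest=0                                 # 0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map above
python3 - "$key" "$digest" <<'PY'
import base64, sys, zlib
key = sys.argv[1].encode()                         # the hex string itself is the secret
crc = zlib.crc32(key).to_bytes(4, "little")        # assumed integrity trailer
print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY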
00:23:44.365 00:57:36 -- host/auth.sh@85 -- # gen_key sha512 64 00:23:44.365 00:57:36 -- host/auth.sh@53 -- # local digest len file key 00:23:44.365 00:57:36 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:44.365 00:57:36 -- host/auth.sh@54 -- # local -A digests 00:23:44.365 00:57:36 -- host/auth.sh@56 -- # digest=sha512 00:23:44.365 00:57:36 -- host/auth.sh@56 -- # len=64 00:23:44.365 00:57:36 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:44.365 00:57:36 -- host/auth.sh@57 -- # key=b2171a652ecda29293d4e9bf51b3d4c4dcce3bff132e6498ad544c78f2319542 00:23:44.365 00:57:36 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:23:44.365 00:57:36 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.6c5 00:23:44.365 00:57:36 -- host/auth.sh@59 -- # format_dhchap_key b2171a652ecda29293d4e9bf51b3d4c4dcce3bff132e6498ad544c78f2319542 3 00:23:44.365 00:57:36 -- nvmf/common.sh@708 -- # format_key DHHC-1 b2171a652ecda29293d4e9bf51b3d4c4dcce3bff132e6498ad544c78f2319542 3 00:23:44.365 00:57:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:23:44.365 00:57:36 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:23:44.365 00:57:36 -- nvmf/common.sh@693 -- # key=b2171a652ecda29293d4e9bf51b3d4c4dcce3bff132e6498ad544c78f2319542 00:23:44.365 00:57:36 -- nvmf/common.sh@693 -- # digest=3 00:23:44.365 00:57:36 -- nvmf/common.sh@694 -- # python - 00:23:44.365 00:57:36 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.6c5 00:23:44.365 00:57:36 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.6c5 00:23:44.365 00:57:36 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.6c5 00:23:44.365 00:57:36 -- host/auth.sh@87 -- # waitforlisten 2873413 00:23:44.365 00:57:36 -- common/autotest_common.sh@817 -- # '[' -z 2873413 ']' 00:23:44.365 00:57:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.365 00:57:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:44.365 00:57:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:44.365 00:57:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:44.365 00:57:36 -- common/autotest_common.sh@10 -- # set +x 00:23:44.365 00:57:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:44.365 00:57:37 -- common/autotest_common.sh@850 -- # return 0 00:23:44.365 00:57:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:44.365 00:57:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.TU3 00:23:44.365 00:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.365 00:57:37 -- common/autotest_common.sh@10 -- # set +x 00:23:44.365 00:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.365 00:57:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:44.365 00:57:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.cOz 00:23:44.365 00:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.365 00:57:37 -- common/autotest_common.sh@10 -- # set +x 00:23:44.365 00:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.365 00:57:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:44.365 00:57:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.98a 00:23:44.365 00:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.365 00:57:37 -- common/autotest_common.sh@10 -- # set +x 00:23:44.365 00:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.365 00:57:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:44.365 00:57:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.gEk 00:23:44.365 00:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.365 00:57:37 -- common/autotest_common.sh@10 -- # set +x 00:23:44.365 00:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.365 00:57:37 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:23:44.365 00:57:37 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.6c5 00:23:44.365 00:57:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:44.365 00:57:37 -- common/autotest_common.sh@10 -- # set +x 00:23:44.365 00:57:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:44.365 00:57:37 -- host/auth.sh@92 -- # nvmet_auth_init 00:23:44.365 00:57:37 -- host/auth.sh@35 -- # get_main_ns_ip 00:23:44.365 00:57:37 -- nvmf/common.sh@717 -- # local ip 00:23:44.365 00:57:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:44.365 00:57:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:44.365 00:57:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.365 00:57:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.365 00:57:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:44.365 00:57:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.365 00:57:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:44.365 00:57:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:44.365 00:57:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:44.365 00:57:37 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:44.365 00:57:37 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:44.365 00:57:37 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:23:44.365 00:57:37 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:44.365 00:57:37 -- nvmf/common.sh@625 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:44.365 00:57:37 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:44.365 00:57:37 -- nvmf/common.sh@628 -- # local block nvme 00:23:44.365 00:57:37 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:23:44.365 00:57:37 -- nvmf/common.sh@631 -- # modprobe nvmet 00:23:44.624 00:57:37 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:44.624 00:57:37 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:23:47.155 Waiting for block devices as requested 00:23:47.155 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:23:47.155 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:47.415 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:47.415 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:23:47.415 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:47.674 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:23:47.674 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:47.674 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:23:47.674 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:47.933 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:23:47.933 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:47.933 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:23:47.933 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:23:48.193 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:23:48.193 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:23:48.193 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:48.193 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:23:48.452 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:23:48.452 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:23:49.019 00:57:41 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:49.019 00:57:41 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:49.019 00:57:41 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:23:49.019 00:57:41 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:23:49.019 00:57:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:49.019 00:57:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:49.019 00:57:41 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:23:49.019 00:57:41 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:49.019 00:57:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:49.019 No valid GPT data, bailing 00:23:49.019 00:57:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:49.019 00:57:41 -- scripts/common.sh@391 -- # pt= 00:23:49.019 00:57:41 -- scripts/common.sh@392 -- # return 1 00:23:49.019 00:57:41 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:23:49.019 00:57:41 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:49.019 00:57:41 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:23:49.019 00:57:41 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:23:49.019 00:57:41 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:23:49.019 00:57:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:23:49.019 00:57:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:49.019 00:57:41 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:23:49.019 00:57:41 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:23:49.019 00:57:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 
00:23:49.019 No valid GPT data, bailing 00:23:49.019 00:57:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:23:49.019 00:57:41 -- scripts/common.sh@391 -- # pt= 00:23:49.019 00:57:41 -- scripts/common.sh@392 -- # return 1 00:23:49.019 00:57:41 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:23:49.019 00:57:41 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:23:49.019 00:57:41 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme2n1 ]] 00:23:49.019 00:57:41 -- nvmf/common.sh@641 -- # is_block_zoned nvme2n1 00:23:49.019 00:57:41 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:23:49.019 00:57:41 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:23:49.019 00:57:41 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:23:49.019 00:57:41 -- nvmf/common.sh@642 -- # block_in_use nvme2n1 00:23:49.019 00:57:41 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:23:49.019 00:57:41 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:23:49.019 No valid GPT data, bailing 00:23:49.280 00:57:41 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:23:49.280 00:57:41 -- scripts/common.sh@391 -- # pt= 00:23:49.280 00:57:41 -- scripts/common.sh@392 -- # return 1 00:23:49.280 00:57:41 -- nvmf/common.sh@642 -- # nvme=/dev/nvme2n1 00:23:49.280 00:57:41 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme2n1 ]] 00:23:49.280 00:57:41 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:49.280 00:57:41 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:49.280 00:57:41 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:49.280 00:57:41 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:49.280 00:57:41 -- nvmf/common.sh@656 -- # echo 1 00:23:49.280 00:57:41 -- nvmf/common.sh@657 -- # echo /dev/nvme2n1 00:23:49.280 00:57:41 -- nvmf/common.sh@658 -- # echo 1 00:23:49.280 00:57:41 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:23:49.280 00:57:41 -- nvmf/common.sh@661 -- # echo tcp 00:23:49.280 00:57:41 -- nvmf/common.sh@662 -- # echo 4420 00:23:49.280 00:57:41 -- nvmf/common.sh@663 -- # echo ipv4 00:23:49.280 00:57:41 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:49.280 00:57:41 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -a 10.0.0.1 -t tcp -s 4420 00:23:49.280 00:23:49.280 Discovery Log Number of Records 2, Generation counter 2 00:23:49.280 =====Discovery Log Entry 0====== 00:23:49.280 trtype: tcp 00:23:49.280 adrfam: ipv4 00:23:49.280 subtype: current discovery subsystem 00:23:49.280 treq: not specified, sq flow control disable supported 00:23:49.280 portid: 1 00:23:49.280 trsvcid: 4420 00:23:49.280 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:49.280 traddr: 10.0.0.1 00:23:49.280 eflags: none 00:23:49.280 sectype: none 00:23:49.280 =====Discovery Log Entry 1====== 00:23:49.280 trtype: tcp 00:23:49.280 adrfam: ipv4 00:23:49.280 subtype: nvme subsystem 00:23:49.280 treq: not specified, sq flow control disable supported 00:23:49.280 portid: 1 00:23:49.280 trsvcid: 4420 00:23:49.280 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:49.280 traddr: 10.0.0.1 00:23:49.280 eflags: none 00:23:49.280 sectype: none 
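(xtrace elides the redirect targets of the echoes above, so here is the same kernel-target setup consolidated into one sketch; the configfs attribute names are assumed from the standard nvmet layout rather than read out of the trace:)

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
port=/sys/kernel/config/nvmet/ports/1
mkdir "$subsys" "$subsys/namespaces/1" "$port"
echo SPDK-nqn.2024-02.io.spdk:cnode0 > "$subsys/attr_model"   # assumed attribute name
echo 1 > "$subsys/attr_allow_any_host"                        # assumed attribute name
echo /dev/nvme2n1 > "$subsys/namespaces/1/device_path"        # backing block device picked above
echo 1 > "$subsys/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$subsys" "$port/subsystems/"   # expose the subsystem on the port, discoverable via 'nvme discover' as shown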
00:23:49.280 00:57:41 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:23:49.280 00:57:41 -- host/auth.sh@37 -- # echo 0
00:23:49.281 00:57:41 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:23:49.281 00:57:41 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:23:49.281 00:57:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:23:49.281 00:57:41 -- host/auth.sh@44 -- # digest=sha256
00:23:49.281 00:57:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:49.281 00:57:41 -- host/auth.sh@44 -- # keyid=1
00:23:49.281 00:57:41 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==:
00:23:49.281 00:57:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:23:49.281 00:57:41 -- host/auth.sh@48 -- # echo ffdhe2048
00:23:49.281 00:57:41 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==:
00:23:49.281 00:57:41 -- host/auth.sh@100 -- # IFS=,
00:23:49.281 00:57:41 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512
00:23:49.281 00:57:41 -- host/auth.sh@100 -- # IFS=,
00:23:49.281 00:57:41 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:49.281 00:57:41 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:23:49.281 00:57:41 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:23:49.281 00:57:41 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512
00:23:49.281 00:57:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:49.281 00:57:41 -- host/auth.sh@68 -- # keyid=1
00:23:49.281 00:57:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:23:49.281 00:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:49.281 00:57:41 -- common/autotest_common.sh@10 -- # set +x
00:23:49.281 00:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:49.281 00:57:41 -- host/auth.sh@70 -- # get_main_ns_ip
00:23:49.281 00:57:41 -- nvmf/common.sh@717 -- # local ip
00:23:49.281 00:57:41 -- nvmf/common.sh@718 -- # ip_candidates=()
00:23:49.281 00:57:41 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:23:49.281 00:57:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:49.281 00:57:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:49.281 00:57:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:23:49.281 00:57:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:49.281 00:57:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:23:49.281 00:57:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:23:49.281 00:57:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:23:49.281 00:57:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1
00:23:49.281 00:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:49.281 00:57:41 -- common/autotest_common.sh@10 -- # set +x
00:23:49.281 nvme0n1
00:23:49.281 00:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:49.281 00:57:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:23:49.281 00:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:49.281 00:57:41 -- common/autotest_common.sh@10 -- # set +x
00:23:49.281 00:57:41 -- host/auth.sh@73 -- # jq -r '.[].name'
00:23:49.281 00:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:49.281 00:57:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:49.281 00:57:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:49.281 00:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:49.281 00:57:41 -- common/autotest_common.sh@10 -- # set +x
00:23:49.281 00:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:49.281 00:57:41 -- host/auth.sh@107 -- # for digest in "${digests[@]}"
00:23:49.281 00:57:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}"
00:23:49.281 00:57:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}"
00:23:49.281 00:57:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:23:49.281 00:57:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key
00:23:49.281 00:57:41 -- host/auth.sh@44 -- # digest=sha256
00:23:49.281 00:57:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:49.281 00:57:41 -- host/auth.sh@44 -- # keyid=0
00:23:49.281 00:57:41 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5:
00:23:49.281 00:57:41 -- host/auth.sh@47 -- # echo 'hmac(sha256)'
00:23:49.281 00:57:41 -- host/auth.sh@48 -- # echo ffdhe2048
00:23:49.281 00:57:41 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5:
00:23:49.281 00:57:41 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0
00:23:49.281 00:57:41 -- host/auth.sh@66 -- # local digest dhgroup keyid
00:23:49.281 00:57:41 -- host/auth.sh@68 -- # digest=sha256
00:23:49.281 00:57:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048
00:23:49.281 00:57:41 -- host/auth.sh@68 -- # keyid=0
00:23:49.281 00:57:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:23:49.281 00:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:49.281 00:57:41 -- common/autotest_common.sh@10 -- # set +x
00:23:49.281 00:57:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:49.542 00:57:41 -- host/auth.sh@70 -- # get_main_ns_ip
00:23:49.542 00:57:41 -- nvmf/common.sh@717 -- # local ip
00:23:49.542 00:57:41 -- nvmf/common.sh@718 -- # ip_candidates=()
00:23:49.542 00:57:41 -- nvmf/common.sh@718 -- # local -A ip_candidates
00:23:49.542 00:57:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:49.542 00:57:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:49.542 00:57:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]]
00:23:49.542 00:57:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:49.542 00:57:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP
00:23:49.542 00:57:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]]
00:23:49.542 00:57:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1
00:23:49.542 00:57:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
00:23:49.542 00:57:41 -- common/autotest_common.sh@549 -- # xtrace_disable
00:23:49.542 00:57:41 -- common/autotest_common.sh@10 -- # set +x
00:23:49.542 nvme0n1
00:23:49.542 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:23:49.542 00:57:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers
00:23:49.542 00:57:42 --
common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.542 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.542 00:57:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:49.542 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.542 00:57:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.542 00:57:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.542 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.542 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.542 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.542 00:57:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:49.542 00:57:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:49.542 00:57:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:49.542 00:57:42 -- host/auth.sh@44 -- # digest=sha256 00:23:49.542 00:57:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.542 00:57:42 -- host/auth.sh@44 -- # keyid=1 00:23:49.542 00:57:42 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:49.542 00:57:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:49.542 00:57:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:49.542 00:57:42 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:49.542 00:57:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:23:49.542 00:57:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:49.542 00:57:42 -- host/auth.sh@68 -- # digest=sha256 00:23:49.542 00:57:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:49.542 00:57:42 -- host/auth.sh@68 -- # keyid=1 00:23:49.542 00:57:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.542 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.542 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.542 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.542 00:57:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:49.542 00:57:42 -- nvmf/common.sh@717 -- # local ip 00:23:49.542 00:57:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:49.542 00:57:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:49.542 00:57:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.542 00:57:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.542 00:57:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:49.542 00:57:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.542 00:57:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:49.542 00:57:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:49.542 00:57:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:49.542 00:57:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:49.542 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.542 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.801 nvme0n1 00:23:49.801 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.801 00:57:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.801 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.801 00:57:42 -- host/auth.sh@73 -- # jq -r '.[].name' 
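The host/auth.sh@107-@110 frames above start the test's main sweep: every digest is paired with every DH group and every key index, and each combination is provisioned on the target and then exercised from the initiator. Reconstructed from the trace as a sketch (the array contents are inferred from the values actually exercised; the dhchap_* configfs paths inside nvmet_auth_set_key are assumptions, since xtrace hides the echo redirection targets):

    # Target side: install the secret for one combination (@42-@49 frames).
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac($digest)"   > "$host/dhchap_hash"      # assumed path
        echo "$dhgroup"        > "$host/dhchap_dhgroup"   # assumed path
        echo "${keys[$keyid]}" > "$host/dhchap_key"       # assumed path
    }

    # Initiator side: connect, verify, tear down (@66-@74 frames).
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" \
            --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q nqn.2024-02.io.spdk:host0 \
            -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
        # The combination passes only if the controller actually came up:
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do    # keyids 0 through 4 in this run
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done

At 3 digests x 5 DH groups x 5 keys, that is 75 near-identical iterations, which is why the remainder of this stretch of the log repeats the same frames with only the digest, dhgroup and keyid fields changing.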
00:23:49.801 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.801 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.801 00:57:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.801 00:57:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.801 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.801 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.801 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.801 00:57:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:49.802 00:57:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:49.802 00:57:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:49.802 00:57:42 -- host/auth.sh@44 -- # digest=sha256 00:23:49.802 00:57:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.802 00:57:42 -- host/auth.sh@44 -- # keyid=2 00:23:49.802 00:57:42 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:49.802 00:57:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:49.802 00:57:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:49.802 00:57:42 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:49.802 00:57:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:23:49.802 00:57:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:49.802 00:57:42 -- host/auth.sh@68 -- # digest=sha256 00:23:49.802 00:57:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:49.802 00:57:42 -- host/auth.sh@68 -- # keyid=2 00:23:49.802 00:57:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.802 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.802 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.802 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.802 00:57:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:49.802 00:57:42 -- nvmf/common.sh@717 -- # local ip 00:23:49.802 00:57:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:49.802 00:57:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:49.802 00:57:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:49.802 00:57:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:49.802 00:57:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:49.802 00:57:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:49.802 00:57:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:49.802 00:57:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:49.802 00:57:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:49.802 00:57:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:49.802 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.802 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.802 nvme0n1 00:23:49.802 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.802 00:57:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:49.802 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.802 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.802 00:57:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:49.802 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.802 00:57:42 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:49.802 00:57:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:49.802 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.802 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:49.802 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.802 00:57:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:49.802 00:57:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:49.802 00:57:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:49.802 00:57:42 -- host/auth.sh@44 -- # digest=sha256 00:23:49.802 00:57:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:49.802 00:57:42 -- host/auth.sh@44 -- # keyid=3 00:23:49.802 00:57:42 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:49.802 00:57:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:49.802 00:57:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:49.802 00:57:42 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:49.802 00:57:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:23:49.802 00:57:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:49.802 00:57:42 -- host/auth.sh@68 -- # digest=sha256 00:23:49.802 00:57:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:49.802 00:57:42 -- host/auth.sh@68 -- # keyid=3 00:23:49.802 00:57:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:49.802 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.802 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.060 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.060 00:57:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.060 00:57:42 -- nvmf/common.sh@717 -- # local ip 00:23:50.060 00:57:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.060 00:57:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.060 00:57:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.060 00:57:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.060 00:57:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.060 00:57:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.060 00:57:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.060 00:57:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.060 00:57:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.060 00:57:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:50.060 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.060 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.060 nvme0n1 00:23:50.060 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.060 00:57:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.060 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.060 00:57:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.060 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.060 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.060 00:57:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.060 00:57:42 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:23:50.060 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.060 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.060 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.060 00:57:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:50.060 00:57:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:50.060 00:57:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:50.060 00:57:42 -- host/auth.sh@44 -- # digest=sha256 00:23:50.060 00:57:42 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:50.060 00:57:42 -- host/auth.sh@44 -- # keyid=4 00:23:50.060 00:57:42 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:50.060 00:57:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:50.060 00:57:42 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:50.060 00:57:42 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:50.060 00:57:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:23:50.060 00:57:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:50.060 00:57:42 -- host/auth.sh@68 -- # digest=sha256 00:23:50.061 00:57:42 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:50.061 00:57:42 -- host/auth.sh@68 -- # keyid=4 00:23:50.061 00:57:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:50.061 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.061 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.061 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.061 00:57:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.061 00:57:42 -- nvmf/common.sh@717 -- # local ip 00:23:50.061 00:57:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.061 00:57:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.061 00:57:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.061 00:57:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.061 00:57:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.061 00:57:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.061 00:57:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.061 00:57:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.061 00:57:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.061 00:57:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:50.061 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.061 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.319 nvme0n1 00:23:50.319 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.319 00:57:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.319 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.319 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.319 00:57:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.319 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.319 00:57:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.319 00:57:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.319 00:57:42 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.319 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.319 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.319 00:57:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:50.319 00:57:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:50.319 00:57:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:50.319 00:57:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:50.319 00:57:42 -- host/auth.sh@44 -- # digest=sha256 00:23:50.320 00:57:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.320 00:57:42 -- host/auth.sh@44 -- # keyid=0 00:23:50.320 00:57:42 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:50.320 00:57:42 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:50.320 00:57:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:50.320 00:57:42 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:50.320 00:57:42 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:23:50.320 00:57:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:50.320 00:57:42 -- host/auth.sh@68 -- # digest=sha256 00:23:50.320 00:57:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:50.320 00:57:42 -- host/auth.sh@68 -- # keyid=0 00:23:50.320 00:57:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.320 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.320 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.320 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.320 00:57:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.320 00:57:42 -- nvmf/common.sh@717 -- # local ip 00:23:50.320 00:57:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.320 00:57:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.320 00:57:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.320 00:57:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.320 00:57:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.320 00:57:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.320 00:57:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.320 00:57:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.320 00:57:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.320 00:57:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:50.320 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.320 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.320 nvme0n1 00:23:50.320 00:57:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.320 00:57:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.320 00:57:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.320 00:57:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.320 00:57:42 -- common/autotest_common.sh@10 -- # set +x 00:23:50.320 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.578 00:57:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.578 00:57:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.578 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.578 00:57:43 -- 
common/autotest_common.sh@10 -- # set +x 00:23:50.578 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.578 00:57:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:50.578 00:57:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:50.578 00:57:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:50.578 00:57:43 -- host/auth.sh@44 -- # digest=sha256 00:23:50.578 00:57:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.578 00:57:43 -- host/auth.sh@44 -- # keyid=1 00:23:50.578 00:57:43 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:50.578 00:57:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:50.578 00:57:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:50.578 00:57:43 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:50.578 00:57:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:23:50.578 00:57:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:50.578 00:57:43 -- host/auth.sh@68 -- # digest=sha256 00:23:50.578 00:57:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:50.578 00:57:43 -- host/auth.sh@68 -- # keyid=1 00:23:50.578 00:57:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.578 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.578 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.578 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.578 00:57:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.578 00:57:43 -- nvmf/common.sh@717 -- # local ip 00:23:50.578 00:57:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.578 00:57:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.578 00:57:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.578 00:57:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.578 00:57:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.578 00:57:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.578 00:57:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.578 00:57:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.578 00:57:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.578 00:57:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:50.578 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.578 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.578 nvme0n1 00:23:50.578 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.578 00:57:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.578 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.578 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.578 00:57:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.578 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.578 00:57:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.578 00:57:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.578 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.578 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.578 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
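The nvmf/common.sh@717-@731 frames that recur before every attach above are get_main_ns_ip resolving which address the initiator should dial: an associative array maps each transport to the name of an environment variable, and bash indirect expansion turns that name into the address itself. Condensed from the trace (only expanded values appear in xtrace, so the name of the transport variable is an assumption):

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1              # expands to tcp here
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}              # a variable *name*
        [[ -z ${!ip} ]] && return 1   # indirect: NVMF_INITIATOR_IP=10.0.0.1
        echo "${!ip}"
    }

That is why every iteration connects to 10.0.0.1, the local initiator-side address of this single-host TCP setup, rather than a remote target.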
00:23:50.578 00:57:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:50.578 00:57:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:23:50.578 00:57:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:50.578 00:57:43 -- host/auth.sh@44 -- # digest=sha256 00:23:50.578 00:57:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.578 00:57:43 -- host/auth.sh@44 -- # keyid=2 00:23:50.578 00:57:43 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:50.578 00:57:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:50.578 00:57:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:50.578 00:57:43 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:50.578 00:57:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:23:50.578 00:57:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:50.578 00:57:43 -- host/auth.sh@68 -- # digest=sha256 00:23:50.578 00:57:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:50.578 00:57:43 -- host/auth.sh@68 -- # keyid=2 00:23:50.578 00:57:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.578 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.578 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.578 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.578 00:57:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.578 00:57:43 -- nvmf/common.sh@717 -- # local ip 00:23:50.578 00:57:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.578 00:57:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.578 00:57:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.578 00:57:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.578 00:57:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.578 00:57:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.578 00:57:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.578 00:57:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.578 00:57:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.578 00:57:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:50.578 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.578 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.838 nvme0n1 00:23:50.838 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.838 00:57:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:50.838 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.838 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.838 00:57:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:50.838 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.838 00:57:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:50.838 00:57:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:50.838 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.838 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.838 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.838 00:57:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:50.838 00:57:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 
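The DHHC-1 secrets cycling through these iterations follow the NVMe in-band authentication key presentation, DHHC-1:<t>:<base64 payload>:, where <t> names the optional key transformation hash (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the payload is the raw secret followed by a 4-byte CRC-32 trailer. A quick offline sanity check of the keyid=0 secret from the trace (an illustration only, not part of the test; GNU head assumed for the negative byte count):

    key='DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5:'
    payload=${key#DHHC-1:*:}    # strip the framing prefix
    payload=${payload%:}        # and the trailing colon
    echo "$payload" | base64 -d | head -c -4; echo
    # prints 7e7fdaea9dabe6a93a3015266b83fa25, the 32-byte secret itself

The longer secrets in the trace decode the same way to 48- and 64-byte values.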
00:23:50.838 00:57:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:50.838 00:57:43 -- host/auth.sh@44 -- # digest=sha256 00:23:50.838 00:57:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:50.838 00:57:43 -- host/auth.sh@44 -- # keyid=3 00:23:50.838 00:57:43 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:50.838 00:57:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:50.838 00:57:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:50.838 00:57:43 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:50.838 00:57:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:23:50.838 00:57:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:50.838 00:57:43 -- host/auth.sh@68 -- # digest=sha256 00:23:50.838 00:57:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:50.838 00:57:43 -- host/auth.sh@68 -- # keyid=3 00:23:50.838 00:57:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:50.838 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.838 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:50.838 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:50.838 00:57:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:50.838 00:57:43 -- nvmf/common.sh@717 -- # local ip 00:23:50.838 00:57:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:50.838 00:57:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:50.838 00:57:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:50.838 00:57:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:50.838 00:57:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:50.838 00:57:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:50.838 00:57:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:50.838 00:57:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:50.838 00:57:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:50.838 00:57:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:50.838 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:50.838 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 nvme0n1 00:23:51.099 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.099 00:57:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.099 00:57:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:51.099 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.099 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.099 00:57:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.099 00:57:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.099 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.099 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.099 00:57:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.099 00:57:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:23:51.099 00:57:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.099 00:57:43 -- host/auth.sh@44 -- 
# digest=sha256 00:23:51.099 00:57:43 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:51.099 00:57:43 -- host/auth.sh@44 -- # keyid=4 00:23:51.099 00:57:43 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:51.099 00:57:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:51.099 00:57:43 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:51.099 00:57:43 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:51.099 00:57:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:23:51.099 00:57:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.099 00:57:43 -- host/auth.sh@68 -- # digest=sha256 00:23:51.099 00:57:43 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:51.099 00:57:43 -- host/auth.sh@68 -- # keyid=4 00:23:51.099 00:57:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:51.099 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.099 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.099 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.099 00:57:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.099 00:57:43 -- nvmf/common.sh@717 -- # local ip 00:23:51.099 00:57:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.099 00:57:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.099 00:57:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.099 00:57:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.099 00:57:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.099 00:57:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.099 00:57:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.099 00:57:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.099 00:57:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.099 00:57:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:51.099 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.099 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.360 nvme0n1 00:23:51.360 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.360 00:57:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:51.360 00:57:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.360 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.360 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.360 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.360 00:57:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.360 00:57:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.360 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.360 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.360 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.360 00:57:43 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:51.360 00:57:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.360 00:57:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:23:51.360 00:57:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.360 00:57:43 -- host/auth.sh@44 -- # 
digest=sha256 00:23:51.360 00:57:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.360 00:57:43 -- host/auth.sh@44 -- # keyid=0 00:23:51.360 00:57:43 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:51.360 00:57:43 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:51.360 00:57:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:51.360 00:57:43 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:51.360 00:57:43 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:23:51.360 00:57:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.360 00:57:43 -- host/auth.sh@68 -- # digest=sha256 00:23:51.360 00:57:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:51.360 00:57:43 -- host/auth.sh@68 -- # keyid=0 00:23:51.360 00:57:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:51.360 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.360 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.360 00:57:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.360 00:57:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.360 00:57:43 -- nvmf/common.sh@717 -- # local ip 00:23:51.360 00:57:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.360 00:57:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.360 00:57:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.360 00:57:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.360 00:57:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.360 00:57:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.360 00:57:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.360 00:57:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.360 00:57:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.360 00:57:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:51.360 00:57:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.360 00:57:43 -- common/autotest_common.sh@10 -- # set +x 00:23:51.620 nvme0n1 00:23:51.620 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.620 00:57:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.620 00:57:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:51.620 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.620 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:51.620 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.620 00:57:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.620 00:57:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.620 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.620 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:51.620 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.620 00:57:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.620 00:57:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:23:51.620 00:57:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.620 00:57:44 -- host/auth.sh@44 -- # digest=sha256 00:23:51.620 00:57:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.620 00:57:44 -- host/auth.sh@44 -- # keyid=1 00:23:51.620 00:57:44 -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:51.620 00:57:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:51.620 00:57:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:51.620 00:57:44 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:51.620 00:57:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:23:51.620 00:57:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.620 00:57:44 -- host/auth.sh@68 -- # digest=sha256 00:23:51.620 00:57:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:51.620 00:57:44 -- host/auth.sh@68 -- # keyid=1 00:23:51.620 00:57:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:51.620 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.620 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:51.620 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.620 00:57:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.620 00:57:44 -- nvmf/common.sh@717 -- # local ip 00:23:51.620 00:57:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.620 00:57:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.620 00:57:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.620 00:57:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.620 00:57:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.620 00:57:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.620 00:57:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.620 00:57:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.620 00:57:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.620 00:57:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:51.620 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.620 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:51.879 nvme0n1 00:23:51.879 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.879 00:57:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:51.879 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.879 00:57:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:51.879 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:51.879 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.879 00:57:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:51.879 00:57:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:51.879 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.879 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:51.879 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.879 00:57:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:51.879 00:57:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:23:51.879 00:57:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:51.879 00:57:44 -- host/auth.sh@44 -- # digest=sha256 00:23:51.879 00:57:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:51.879 00:57:44 -- host/auth.sh@44 -- # keyid=2 00:23:51.879 00:57:44 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:51.879 00:57:44 -- 
host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:51.879 00:57:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:51.879 00:57:44 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:51.879 00:57:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:23:51.879 00:57:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:51.879 00:57:44 -- host/auth.sh@68 -- # digest=sha256 00:23:51.879 00:57:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:51.879 00:57:44 -- host/auth.sh@68 -- # keyid=2 00:23:51.879 00:57:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:51.879 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.879 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:51.879 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:51.879 00:57:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:51.879 00:57:44 -- nvmf/common.sh@717 -- # local ip 00:23:51.879 00:57:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:51.879 00:57:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:51.879 00:57:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:51.879 00:57:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:51.879 00:57:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:51.879 00:57:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:51.879 00:57:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:51.879 00:57:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:51.879 00:57:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:51.879 00:57:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:51.879 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:51.879 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.137 nvme0n1 00:23:52.137 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.137 00:57:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.137 00:57:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:52.137 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.137 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.137 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.137 00:57:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.137 00:57:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.137 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.137 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.137 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.137 00:57:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:52.137 00:57:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:23:52.137 00:57:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:52.137 00:57:44 -- host/auth.sh@44 -- # digest=sha256 00:23:52.137 00:57:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.137 00:57:44 -- host/auth.sh@44 -- # keyid=3 00:23:52.137 00:57:44 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:52.137 00:57:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:52.137 00:57:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:52.137 00:57:44 -- host/auth.sh@49 
-- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:52.137 00:57:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:23:52.137 00:57:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:52.137 00:57:44 -- host/auth.sh@68 -- # digest=sha256 00:23:52.137 00:57:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:52.137 00:57:44 -- host/auth.sh@68 -- # keyid=3 00:23:52.137 00:57:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:52.137 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.137 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.137 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.137 00:57:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:52.137 00:57:44 -- nvmf/common.sh@717 -- # local ip 00:23:52.137 00:57:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:52.137 00:57:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:52.137 00:57:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.137 00:57:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.137 00:57:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:52.137 00:57:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.137 00:57:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:52.137 00:57:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:52.137 00:57:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:52.137 00:57:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:52.137 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.137 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.394 nvme0n1 00:23:52.394 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.394 00:57:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.394 00:57:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:52.394 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.394 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.394 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.394 00:57:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.394 00:57:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.394 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.394 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.394 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.394 00:57:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:52.394 00:57:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:23:52.395 00:57:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:52.395 00:57:44 -- host/auth.sh@44 -- # digest=sha256 00:23:52.395 00:57:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:52.395 00:57:44 -- host/auth.sh@44 -- # keyid=4 00:23:52.395 00:57:44 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:52.395 00:57:44 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:52.395 00:57:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:52.395 00:57:44 -- host/auth.sh@49 -- # echo 
DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:52.395 00:57:44 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:23:52.395 00:57:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:52.395 00:57:44 -- host/auth.sh@68 -- # digest=sha256 00:23:52.395 00:57:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:52.395 00:57:44 -- host/auth.sh@68 -- # keyid=4 00:23:52.395 00:57:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:23:52.395 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.395 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.395 00:57:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.395 00:57:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:52.395 00:57:44 -- nvmf/common.sh@717 -- # local ip 00:23:52.395 00:57:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:52.395 00:57:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:52.395 00:57:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.395 00:57:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.395 00:57:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:52.395 00:57:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.395 00:57:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:52.395 00:57:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:52.395 00:57:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:52.395 00:57:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:52.395 00:57:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.395 00:57:44 -- common/autotest_common.sh@10 -- # set +x 00:23:52.653 nvme0n1 00:23:52.653 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.653 00:57:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.653 00:57:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:52.653 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.653 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.653 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.653 00:57:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.653 00:57:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.653 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.653 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.653 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.653 00:57:45 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:52.653 00:57:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:52.653 00:57:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:52.654 00:57:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:52.654 00:57:45 -- host/auth.sh@44 -- # digest=sha256 00:23:52.654 00:57:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.654 00:57:45 -- host/auth.sh@44 -- # keyid=0 00:23:52.654 00:57:45 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:52.654 00:57:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:52.654 00:57:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:52.654 00:57:45 -- host/auth.sh@49 -- # echo 
DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:52.654 00:57:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:23:52.654 00:57:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:52.654 00:57:45 -- host/auth.sh@68 -- # digest=sha256 00:23:52.654 00:57:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:52.654 00:57:45 -- host/auth.sh@68 -- # keyid=0 00:23:52.654 00:57:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:52.654 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.654 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.654 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.654 00:57:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:52.654 00:57:45 -- nvmf/common.sh@717 -- # local ip 00:23:52.654 00:57:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:52.654 00:57:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:52.654 00:57:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.654 00:57:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.654 00:57:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:52.654 00:57:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.654 00:57:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:52.654 00:57:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:52.654 00:57:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:52.654 00:57:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:52.654 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.654 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.913 nvme0n1 00:23:52.913 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.913 00:57:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:52.913 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.913 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.913 00:57:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:52.913 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.913 00:57:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:52.913 00:57:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:52.913 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.913 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.913 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.913 00:57:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:52.913 00:57:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:52.913 00:57:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:52.913 00:57:45 -- host/auth.sh@44 -- # digest=sha256 00:23:52.913 00:57:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:52.913 00:57:45 -- host/auth.sh@44 -- # keyid=1 00:23:52.913 00:57:45 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:52.913 00:57:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:52.913 00:57:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:52.913 00:57:45 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:52.913 00:57:45 -- host/auth.sh@111 -- # 
connect_authenticate sha256 ffdhe6144 1 00:23:52.913 00:57:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:52.913 00:57:45 -- host/auth.sh@68 -- # digest=sha256 00:23:52.913 00:57:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:52.913 00:57:45 -- host/auth.sh@68 -- # keyid=1 00:23:52.913 00:57:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:52.913 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.913 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:52.913 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:52.913 00:57:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:52.913 00:57:45 -- nvmf/common.sh@717 -- # local ip 00:23:52.913 00:57:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:52.913 00:57:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:52.914 00:57:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:52.914 00:57:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:52.914 00:57:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:52.914 00:57:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:52.914 00:57:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:52.914 00:57:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:52.914 00:57:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:52.914 00:57:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:52.914 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:52.914 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:53.481 nvme0n1 00:23:53.481 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.481 00:57:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.481 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.481 00:57:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:53.481 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:53.481 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.481 00:57:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.481 00:57:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.481 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.481 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:53.481 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.481 00:57:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:53.481 00:57:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:53.481 00:57:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:53.481 00:57:45 -- host/auth.sh@44 -- # digest=sha256 00:23:53.481 00:57:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.482 00:57:45 -- host/auth.sh@44 -- # keyid=2 00:23:53.482 00:57:45 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:53.482 00:57:45 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:53.482 00:57:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:53.482 00:57:45 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:53.482 00:57:45 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:23:53.482 00:57:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:53.482 00:57:45 -- host/auth.sh@68 -- # 
digest=sha256 00:23:53.482 00:57:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:53.482 00:57:45 -- host/auth.sh@68 -- # keyid=2 00:23:53.482 00:57:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:53.482 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.482 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:53.482 00:57:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.482 00:57:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:53.482 00:57:45 -- nvmf/common.sh@717 -- # local ip 00:23:53.482 00:57:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:53.482 00:57:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:53.482 00:57:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.482 00:57:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.482 00:57:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:53.482 00:57:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.482 00:57:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:53.482 00:57:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:53.482 00:57:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:53.482 00:57:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:53.482 00:57:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.482 00:57:45 -- common/autotest_common.sh@10 -- # set +x 00:23:53.740 nvme0n1 00:23:53.740 00:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.740 00:57:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.740 00:57:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:53.740 00:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.740 00:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.740 00:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.740 00:57:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:53.740 00:57:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:53.740 00:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.740 00:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.740 00:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.740 00:57:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:53.740 00:57:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:53.740 00:57:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:53.740 00:57:46 -- host/auth.sh@44 -- # digest=sha256 00:23:53.740 00:57:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:53.740 00:57:46 -- host/auth.sh@44 -- # keyid=3 00:23:53.740 00:57:46 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:53.740 00:57:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:53.740 00:57:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:53.740 00:57:46 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:53.740 00:57:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:23:53.740 00:57:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:53.740 00:57:46 -- host/auth.sh@68 -- # digest=sha256 00:23:53.740 00:57:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:53.740 00:57:46 -- host/auth.sh@68 
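The nvmf/common.sh lines in between are get_main_ns_ip resolving which address to dial. Its body is almost fully visible in the trace: a transport-to-variable-name table plus an indirect expansion. Sketch (the TEST_TRANSPORT variable name is assumed; the trace only shows its value, tcp):

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP

      [[ -z $TEST_TRANSPORT ]] && return 1                    # traced as: [[ -z tcp ]]
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # [[ -z NVMF_INITIATOR_IP ]]
      ip=${ip_candidates[$TEST_TRANSPORT]}  # a variable *name*, NVMF_INITIATOR_IP here
      [[ -z ${!ip} ]] && return 1           # indirect expansion, 10.0.0.1 in this run
      echo "${!ip}"
  }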
-- # keyid=3 00:23:53.740 00:57:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:53.740 00:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.740 00:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.740 00:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.740 00:57:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:53.740 00:57:46 -- nvmf/common.sh@717 -- # local ip 00:23:53.740 00:57:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:53.740 00:57:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:53.740 00:57:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:53.740 00:57:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:53.740 00:57:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:53.740 00:57:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:53.740 00:57:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:53.740 00:57:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:53.740 00:57:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:53.740 00:57:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:53.740 00:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.740 00:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:53.997 nvme0n1 00:23:53.997 00:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:53.997 00:57:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:53.997 00:57:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:53.997 00:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:53.997 00:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:54.255 00:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.255 00:57:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.255 00:57:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.255 00:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.255 00:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:54.255 00:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.255 00:57:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:54.255 00:57:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:54.255 00:57:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:54.255 00:57:46 -- host/auth.sh@44 -- # digest=sha256 00:23:54.255 00:57:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:54.255 00:57:46 -- host/auth.sh@44 -- # keyid=4 00:23:54.255 00:57:46 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:54.255 00:57:46 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:54.255 00:57:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:23:54.255 00:57:46 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:54.255 00:57:46 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:23:54.255 00:57:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:54.255 00:57:46 -- host/auth.sh@68 -- # digest=sha256 00:23:54.255 00:57:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:23:54.255 00:57:46 -- host/auth.sh@68 -- # keyid=4 00:23:54.255 00:57:46 -- host/auth.sh@69 -- # rpc_cmd 
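By this point all five secrets have appeared in the trace, one per keyid. They follow the NVMe DH-HMAC-CHAP secret representation DHHC-1:XX:<base64>:, where XX records the transform applied to the secret (00 cleartext; 01, 02, 03 for SHA-256, SHA-384, SHA-512) and the base64 payload carries the secret plus a CRC-32 tail. Collected from the trace into the keys array the keyid loop iterates:

  keys=(
      "DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5:"                          # 32-byte cleartext
      "DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==:"  # 48-byte cleartext
      "DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5:"                          # SHA-256 transformed
      "DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==:"  # SHA-384 transformed
      "DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=:"  # SHA-512 transformed
  )

  # Quick length check of a payload (secret plus 4-byte CRC-32):
  key=${keys[0]}
  payload=${key#DHHC-1:*:}; payload=${payload%:}
  echo -n "$payload" | base64 -d | wc -c   # 36 = 32-byte secret + CRC-32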
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:54.255 00:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.255 00:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:54.255 00:57:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.255 00:57:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:54.255 00:57:46 -- nvmf/common.sh@717 -- # local ip 00:23:54.255 00:57:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:54.255 00:57:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:54.255 00:57:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.255 00:57:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.255 00:57:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:54.255 00:57:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.255 00:57:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:54.255 00:57:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:54.255 00:57:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:54.255 00:57:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:54.255 00:57:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.255 00:57:46 -- common/autotest_common.sh@10 -- # set +x 00:23:54.513 nvme0n1 00:23:54.513 00:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.513 00:57:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:54.513 00:57:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:54.513 00:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.513 00:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:54.513 00:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.513 00:57:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:54.513 00:57:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:54.514 00:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.514 00:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:54.514 00:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.514 00:57:47 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:54.514 00:57:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:54.514 00:57:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:54.514 00:57:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:54.514 00:57:47 -- host/auth.sh@44 -- # digest=sha256 00:23:54.514 00:57:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:54.514 00:57:47 -- host/auth.sh@44 -- # keyid=0 00:23:54.514 00:57:47 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:54.514 00:57:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:54.514 00:57:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:54.514 00:57:47 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:54.514 00:57:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:23:54.514 00:57:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:54.514 00:57:47 -- host/auth.sh@68 -- # digest=sha256 00:23:54.514 00:57:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:54.514 00:57:47 -- host/auth.sh@68 -- # keyid=0 00:23:54.514 00:57:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:54.514 
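The [[ nvme0 == \n\v\m\e\0 ]] lines in the trace are not corruption: inside [[ ]] the right-hand side of == is a glob pattern, so when the script quotes it to force a literal comparison, xtrace prints every character backslash-escaped. The underlying check is simply:

  ctrlr=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $ctrlr == "nvme0" ]]   # xtrace renders the quoted literal as \n\v\m\e\0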
00:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.514 00:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:54.514 00:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:54.514 00:57:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:54.514 00:57:47 -- nvmf/common.sh@717 -- # local ip 00:23:54.514 00:57:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:54.514 00:57:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:54.514 00:57:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:54.514 00:57:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:54.514 00:57:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:54.514 00:57:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:54.514 00:57:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:54.514 00:57:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:54.514 00:57:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:54.514 00:57:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:54.514 00:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:54.514 00:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:55.129 nvme0n1 00:23:55.129 00:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.129 00:57:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.129 00:57:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:55.129 00:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.129 00:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:55.129 00:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.129 00:57:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.129 00:57:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.129 00:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.129 00:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:55.129 00:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.129 00:57:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:55.129 00:57:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:55.129 00:57:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:55.129 00:57:47 -- host/auth.sh@44 -- # digest=sha256 00:23:55.129 00:57:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.129 00:57:47 -- host/auth.sh@44 -- # keyid=1 00:23:55.129 00:57:47 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:55.129 00:57:47 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:55.129 00:57:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:55.129 00:57:47 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:55.129 00:57:47 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:23:55.129 00:57:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:55.129 00:57:47 -- host/auth.sh@68 -- # digest=sha256 00:23:55.129 00:57:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:55.129 00:57:47 -- host/auth.sh@68 -- # keyid=1 00:23:55.129 00:57:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:55.129 00:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.129 00:57:47 -- common/autotest_common.sh@10 -- # 
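The recurring autotest_common.sh@549 xtrace_disable plus @10 set +x pairs mute tracing around each rpc_cmd round-trip, which is why the RPC internals never appear between them. Only the disable half is visible in the log; the save/restore detail below is an inferred sketch, not the verbatim helper:

  xtrace_disable() {
      PREV_XTRACE=$(set +o | grep xtrace)   # remember whether -x was on
      set +x                                # the last line xtrace can print
  }
  xtrace_restore() {
      eval "$PREV_XTRACE"                   # turn tracing back on if it was
  }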
set +x 00:23:55.129 00:57:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.129 00:57:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:55.129 00:57:47 -- nvmf/common.sh@717 -- # local ip 00:23:55.129 00:57:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:55.129 00:57:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:55.129 00:57:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.129 00:57:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.129 00:57:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:55.129 00:57:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.129 00:57:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:55.129 00:57:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:55.129 00:57:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:55.129 00:57:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:55.129 00:57:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.129 00:57:47 -- common/autotest_common.sh@10 -- # set +x 00:23:55.696 nvme0n1 00:23:55.696 00:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.696 00:57:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:55.696 00:57:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:55.696 00:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.696 00:57:48 -- common/autotest_common.sh@10 -- # set +x 00:23:55.696 00:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.953 00:57:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.953 00:57:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:55.953 00:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.953 00:57:48 -- common/autotest_common.sh@10 -- # set +x 00:23:55.953 00:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.953 00:57:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:55.953 00:57:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:55.953 00:57:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:55.953 00:57:48 -- host/auth.sh@44 -- # digest=sha256 00:23:55.953 00:57:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:55.953 00:57:48 -- host/auth.sh@44 -- # keyid=2 00:23:55.953 00:57:48 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:55.953 00:57:48 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:55.953 00:57:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:55.953 00:57:48 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:55.953 00:57:48 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:23:55.953 00:57:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:55.953 00:57:48 -- host/auth.sh@68 -- # digest=sha256 00:23:55.953 00:57:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:55.953 00:57:48 -- host/auth.sh@68 -- # keyid=2 00:23:55.953 00:57:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:55.953 00:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.953 00:57:48 -- common/autotest_common.sh@10 -- # set +x 00:23:55.953 00:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:55.953 00:57:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:55.953 00:57:48 -- 
nvmf/common.sh@717 -- # local ip 00:23:55.953 00:57:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:55.953 00:57:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:55.953 00:57:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:55.953 00:57:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:55.953 00:57:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:55.954 00:57:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:55.954 00:57:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:55.954 00:57:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:55.954 00:57:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:55.954 00:57:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:55.954 00:57:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:55.954 00:57:48 -- common/autotest_common.sh@10 -- # set +x 00:23:56.564 nvme0n1 00:23:56.564 00:57:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.564 00:57:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:56.564 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.564 00:57:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:56.564 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:56.564 00:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.564 00:57:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:56.564 00:57:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:56.564 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.564 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:56.564 00:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.564 00:57:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:56.564 00:57:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:56.564 00:57:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:56.564 00:57:49 -- host/auth.sh@44 -- # digest=sha256 00:23:56.564 00:57:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:56.564 00:57:49 -- host/auth.sh@44 -- # keyid=3 00:23:56.564 00:57:49 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:56.564 00:57:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:56.564 00:57:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:56.564 00:57:49 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:56.564 00:57:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:23:56.564 00:57:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:56.564 00:57:49 -- host/auth.sh@68 -- # digest=sha256 00:23:56.564 00:57:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:56.564 00:57:49 -- host/auth.sh@68 -- # keyid=3 00:23:56.564 00:57:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:56.564 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.565 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:56.565 00:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:56.565 00:57:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:56.565 00:57:49 -- nvmf/common.sh@717 -- # local ip 00:23:56.565 00:57:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:56.565 00:57:49 
-- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:56.565 00:57:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:56.565 00:57:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:56.565 00:57:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:56.565 00:57:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:56.565 00:57:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:56.565 00:57:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:56.565 00:57:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:56.565 00:57:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:56.565 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:56.565 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:57.133 nvme0n1 00:23:57.133 00:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.133 00:57:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.133 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.133 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:57.133 00:57:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:57.133 00:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.133 00:57:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.133 00:57:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.133 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.133 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:57.133 00:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.133 00:57:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:57.133 00:57:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:57.134 00:57:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:57.134 00:57:49 -- host/auth.sh@44 -- # digest=sha256 00:23:57.134 00:57:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:57.134 00:57:49 -- host/auth.sh@44 -- # keyid=4 00:23:57.134 00:57:49 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:57.134 00:57:49 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:23:57.134 00:57:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:23:57.134 00:57:49 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:57.134 00:57:49 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:23:57.134 00:57:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:57.134 00:57:49 -- host/auth.sh@68 -- # digest=sha256 00:23:57.134 00:57:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:23:57.134 00:57:49 -- host/auth.sh@68 -- # keyid=4 00:23:57.134 00:57:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:57.134 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.134 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:57.134 00:57:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.134 00:57:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:57.134 00:57:49 -- nvmf/common.sh@717 -- # local ip 00:23:57.134 00:57:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:57.134 00:57:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:57.134 00:57:49 -- 
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.134 00:57:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.134 00:57:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:57.134 00:57:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.134 00:57:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:57.134 00:57:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:57.134 00:57:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:57.134 00:57:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:57.134 00:57:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.134 00:57:49 -- common/autotest_common.sh@10 -- # set +x 00:23:57.702 nvme0n1 00:23:57.702 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.702 00:57:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.702 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.702 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.702 00:57:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:57.702 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.702 00:57:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.702 00:57:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.702 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.702 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.702 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.702 00:57:50 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:23:57.702 00:57:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:57.702 00:57:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:57.702 00:57:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:57.702 00:57:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:57.702 00:57:50 -- host/auth.sh@44 -- # digest=sha384 00:23:57.702 00:57:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:57.702 00:57:50 -- host/auth.sh@44 -- # keyid=0 00:23:57.702 00:57:50 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:57.702 00:57:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:57.702 00:57:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:57.702 00:57:50 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:57.702 00:57:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:23:57.702 00:57:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:57.702 00:57:50 -- host/auth.sh@68 -- # digest=sha384 00:23:57.702 00:57:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:57.702 00:57:50 -- host/auth.sh@68 -- # keyid=0 00:23:57.702 00:57:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:57.702 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.702 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.702 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.702 00:57:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:57.702 00:57:50 -- nvmf/common.sh@717 -- # local ip 00:23:57.702 00:57:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:57.702 00:57:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:57.702 00:57:50 -- 
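The host/auth.sh@107, @108 and @109 for-lines just above are the outer structure of this whole section: one target-side key programming plus one authenticated connect per digest, DH group and key id combination. Reconstructed shape, with the lists trimmed to the values that actually appear in this stretch of the log:

  digests=(sha256 sha384)   # sha256 finished above, sha384 starts here
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target side
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # initiator side
          done
      done
  done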
nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.702 00:57:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.702 00:57:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:57.702 00:57:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.702 00:57:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:57.702 00:57:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:57.702 00:57:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:57.702 00:57:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:57.702 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.702 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.960 nvme0n1 00:23:57.960 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.960 00:57:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.960 00:57:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:57.960 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.960 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.960 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.960 00:57:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.960 00:57:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:57.960 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.960 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.960 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.960 00:57:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:57.960 00:57:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:57.960 00:57:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:57.960 00:57:50 -- host/auth.sh@44 -- # digest=sha384 00:23:57.960 00:57:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:57.960 00:57:50 -- host/auth.sh@44 -- # keyid=1 00:23:57.960 00:57:50 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:57.960 00:57:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:57.960 00:57:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:57.960 00:57:50 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:57.960 00:57:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:23:57.960 00:57:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:57.960 00:57:50 -- host/auth.sh@68 -- # digest=sha384 00:23:57.960 00:57:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:57.960 00:57:50 -- host/auth.sh@68 -- # keyid=1 00:23:57.960 00:57:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:57.960 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.960 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.960 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.960 00:57:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:57.960 00:57:50 -- nvmf/common.sh@717 -- # local ip 00:23:57.960 00:57:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:57.960 00:57:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:57.960 00:57:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:57.960 00:57:50 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:57.960 00:57:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:57.960 00:57:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:57.960 00:57:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:57.960 00:57:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:57.960 00:57:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:57.960 00:57:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:57.960 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.960 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.960 nvme0n1 00:23:57.960 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:57.960 00:57:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:57.960 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:57.960 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:57.960 00:57:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:57.960 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.218 00:57:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.218 00:57:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.218 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.218 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:58.218 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.218 00:57:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.218 00:57:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:58.218 00:57:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.218 00:57:50 -- host/auth.sh@44 -- # digest=sha384 00:23:58.218 00:57:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.218 00:57:50 -- host/auth.sh@44 -- # keyid=2 00:23:58.218 00:57:50 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:58.218 00:57:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:58.218 00:57:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:58.218 00:57:50 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:58.218 00:57:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:23:58.218 00:57:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.218 00:57:50 -- host/auth.sh@68 -- # digest=sha384 00:23:58.218 00:57:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:58.218 00:57:50 -- host/auth.sh@68 -- # keyid=2 00:23:58.218 00:57:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.218 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.219 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:58.219 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.219 00:57:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.219 00:57:50 -- nvmf/common.sh@717 -- # local ip 00:23:58.219 00:57:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.219 00:57:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.219 00:57:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.219 00:57:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.219 00:57:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.219 00:57:50 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:23:58.219 00:57:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.219 00:57:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.219 00:57:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.219 00:57:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:58.219 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.219 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:58.219 nvme0n1 00:23:58.219 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.219 00:57:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.219 00:57:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.219 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.219 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:58.219 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.219 00:57:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.219 00:57:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.219 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.219 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:58.219 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.219 00:57:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.219 00:57:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:58.219 00:57:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.219 00:57:50 -- host/auth.sh@44 -- # digest=sha384 00:23:58.219 00:57:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.219 00:57:50 -- host/auth.sh@44 -- # keyid=3 00:23:58.219 00:57:50 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:58.219 00:57:50 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:58.219 00:57:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:58.219 00:57:50 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:58.219 00:57:50 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:23:58.219 00:57:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.219 00:57:50 -- host/auth.sh@68 -- # digest=sha384 00:23:58.219 00:57:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:58.219 00:57:50 -- host/auth.sh@68 -- # keyid=3 00:23:58.219 00:57:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.219 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.219 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:58.219 00:57:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.219 00:57:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.219 00:57:50 -- nvmf/common.sh@717 -- # local ip 00:23:58.219 00:57:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.219 00:57:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.219 00:57:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.219 00:57:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.219 00:57:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.219 00:57:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.219 00:57:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.219 00:57:50 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.219 00:57:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.219 00:57:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:58.219 00:57:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.219 00:57:50 -- common/autotest_common.sh@10 -- # set +x 00:23:58.477 nvme0n1 00:23:58.477 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.477 00:57:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.477 00:57:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.477 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.477 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.477 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.477 00:57:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.477 00:57:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.477 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.477 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.477 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.477 00:57:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.477 00:57:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:58.477 00:57:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.477 00:57:51 -- host/auth.sh@44 -- # digest=sha384 00:23:58.477 00:57:51 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:58.477 00:57:51 -- host/auth.sh@44 -- # keyid=4 00:23:58.477 00:57:51 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:58.477 00:57:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:58.477 00:57:51 -- host/auth.sh@48 -- # echo ffdhe2048 00:23:58.477 00:57:51 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:58.477 00:57:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:23:58.477 00:57:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.477 00:57:51 -- host/auth.sh@68 -- # digest=sha384 00:23:58.477 00:57:51 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:23:58.477 00:57:51 -- host/auth.sh@68 -- # keyid=4 00:23:58.477 00:57:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:58.477 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.477 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.477 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.477 00:57:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.477 00:57:51 -- nvmf/common.sh@717 -- # local ip 00:23:58.477 00:57:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.477 00:57:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.477 00:57:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.477 00:57:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.477 00:57:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.477 00:57:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.477 00:57:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.477 00:57:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.477 00:57:51 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.477 00:57:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:58.477 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.477 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 nvme0n1 00:23:58.737 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.737 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.737 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 00:57:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.737 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.737 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.737 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:58.737 00:57:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.737 00:57:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:58.737 00:57:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.737 00:57:51 -- host/auth.sh@44 -- # digest=sha384 00:23:58.737 00:57:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:58.737 00:57:51 -- host/auth.sh@44 -- # keyid=0 00:23:58.737 00:57:51 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:58.737 00:57:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:58.737 00:57:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:58.737 00:57:51 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:58.737 00:57:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:23:58.737 00:57:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.737 00:57:51 -- host/auth.sh@68 -- # digest=sha384 00:23:58.737 00:57:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:58.737 00:57:51 -- host/auth.sh@68 -- # keyid=0 00:23:58.737 00:57:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:58.737 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.737 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.737 00:57:51 -- nvmf/common.sh@717 -- # local ip 00:23:58.737 00:57:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.737 00:57:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.737 00:57:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.737 00:57:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.737 00:57:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.737 00:57:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.737 00:57:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.737 00:57:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.737 00:57:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.737 00:57:51 -- host/auth.sh@70 -- # 
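For reference, the attach call every iteration issues, unpacked flag by flag (all values verbatim from the trace; the key name refers to a DH-HMAC-CHAP key loaded into the initiator earlier in the script, outside this excerpt):

  args=(
      -b nvme0                         # controller name, hence bdev nvme0n1
      -t tcp -f ipv4                   # transport and address family
      -a 10.0.0.1 -s 4420              # target address and service (port)
      -q nqn.2024-02.io.spdk:host0     # initiator host NQN
      -n nqn.2024-02.io.spdk:cnode0    # target subsystem NQN
      --dhchap-key key0                # which key to authenticate with
  )
  rpc_cmd bdev_nvme_attach_controller "${args[@]}"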
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:58.737 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.737 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 nvme0n1 00:23:58.737 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.737 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.737 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 00:57:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.737 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.737 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.737 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.737 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.737 00:57:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.737 00:57:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:58.737 00:57:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.737 00:57:51 -- host/auth.sh@44 -- # digest=sha384 00:23:58.737 00:57:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:58.737 00:57:51 -- host/auth.sh@44 -- # keyid=1 00:23:58.737 00:57:51 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:58.737 00:57:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:58.737 00:57:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:58.998 00:57:51 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:23:58.998 00:57:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:23:58.998 00:57:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.998 00:57:51 -- host/auth.sh@68 -- # digest=sha384 00:23:58.998 00:57:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:58.998 00:57:51 -- host/auth.sh@68 -- # keyid=1 00:23:58.998 00:57:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:58.998 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.998 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.998 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.998 00:57:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.998 00:57:51 -- nvmf/common.sh@717 -- # local ip 00:23:58.998 00:57:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.998 00:57:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.998 00:57:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.998 00:57:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.998 00:57:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.998 00:57:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.998 00:57:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.998 00:57:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.998 00:57:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.998 00:57:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:23:58.998 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.998 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.998 nvme0n1 00:23:58.998 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.998 00:57:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:58.998 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.998 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.998 00:57:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:58.998 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.998 00:57:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.998 00:57:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:58.998 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.998 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.998 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.998 00:57:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:58.998 00:57:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:58.998 00:57:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:58.998 00:57:51 -- host/auth.sh@44 -- # digest=sha384 00:23:58.998 00:57:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:58.998 00:57:51 -- host/auth.sh@44 -- # keyid=2 00:23:58.998 00:57:51 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:58.998 00:57:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:58.998 00:57:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:58.998 00:57:51 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:23:58.998 00:57:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:23:58.998 00:57:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:58.998 00:57:51 -- host/auth.sh@68 -- # digest=sha384 00:23:58.998 00:57:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:58.998 00:57:51 -- host/auth.sh@68 -- # keyid=2 00:23:58.998 00:57:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:58.998 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.998 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:58.998 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:58.998 00:57:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:58.998 00:57:51 -- nvmf/common.sh@717 -- # local ip 00:23:58.998 00:57:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:58.998 00:57:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:58.998 00:57:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:58.998 00:57:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:58.998 00:57:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:58.998 00:57:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:58.998 00:57:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:58.998 00:57:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:58.998 00:57:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:58.998 00:57:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:58.998 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:58.998 00:57:51 -- 
common/autotest_common.sh@10 -- # set +x 00:23:59.258 nvme0n1 00:23:59.258 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.258 00:57:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.259 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.259 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.259 00:57:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.259 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.259 00:57:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.259 00:57:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.259 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.259 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.259 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.259 00:57:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.259 00:57:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:59.259 00:57:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.259 00:57:51 -- host/auth.sh@44 -- # digest=sha384 00:23:59.259 00:57:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.259 00:57:51 -- host/auth.sh@44 -- # keyid=3 00:23:59.259 00:57:51 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:59.259 00:57:51 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:59.259 00:57:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:59.259 00:57:51 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:23:59.259 00:57:51 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:23:59.259 00:57:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.259 00:57:51 -- host/auth.sh@68 -- # digest=sha384 00:23:59.259 00:57:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:59.259 00:57:51 -- host/auth.sh@68 -- # keyid=3 00:23:59.259 00:57:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:59.259 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.259 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.259 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.259 00:57:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:59.259 00:57:51 -- nvmf/common.sh@717 -- # local ip 00:23:59.259 00:57:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.259 00:57:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.259 00:57:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.259 00:57:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.259 00:57:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:59.259 00:57:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.259 00:57:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:59.259 00:57:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:59.259 00:57:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:59.259 00:57:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:23:59.259 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.259 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.519 nvme0n1 00:23:59.519 00:57:51 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:23:59.519 00:57:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.519 00:57:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.519 00:57:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.519 00:57:51 -- common/autotest_common.sh@10 -- # set +x 00:23:59.519 00:57:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.519 00:57:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.519 00:57:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.519 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.519 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.519 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.519 00:57:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.519 00:57:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:59.519 00:57:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.519 00:57:52 -- host/auth.sh@44 -- # digest=sha384 00:23:59.519 00:57:52 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:59.519 00:57:52 -- host/auth.sh@44 -- # keyid=4 00:23:59.519 00:57:52 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:59.519 00:57:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:59.519 00:57:52 -- host/auth.sh@48 -- # echo ffdhe3072 00:23:59.519 00:57:52 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:23:59.519 00:57:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:23:59.519 00:57:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.519 00:57:52 -- host/auth.sh@68 -- # digest=sha384 00:23:59.519 00:57:52 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:23:59.519 00:57:52 -- host/auth.sh@68 -- # keyid=4 00:23:59.519 00:57:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:59.519 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.519 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.519 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.519 00:57:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:59.519 00:57:52 -- nvmf/common.sh@717 -- # local ip 00:23:59.519 00:57:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.519 00:57:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.519 00:57:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.519 00:57:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.519 00:57:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:59.519 00:57:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.519 00:57:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:59.519 00:57:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:59.519 00:57:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:59.519 00:57:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:59.519 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.519 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.519 nvme0n1 00:23:59.519 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.519 00:57:52 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:23:59.519 00:57:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:23:59.519 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.519 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.519 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.519 00:57:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.519 00:57:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.519 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.519 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.777 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.778 00:57:52 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:23:59.778 00:57:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:23:59.778 00:57:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:59.778 00:57:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:23:59.778 00:57:52 -- host/auth.sh@44 -- # digest=sha384 00:23:59.778 00:57:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:59.778 00:57:52 -- host/auth.sh@44 -- # keyid=0 00:23:59.778 00:57:52 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:59.778 00:57:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:23:59.778 00:57:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:23:59.778 00:57:52 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:23:59.778 00:57:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:23:59.778 00:57:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:23:59.778 00:57:52 -- host/auth.sh@68 -- # digest=sha384 00:23:59.778 00:57:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:23:59.778 00:57:52 -- host/auth.sh@68 -- # keyid=0 00:23:59.778 00:57:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:59.778 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.778 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.778 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.778 00:57:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:23:59.778 00:57:52 -- nvmf/common.sh@717 -- # local ip 00:23:59.778 00:57:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:23:59.778 00:57:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:23:59.778 00:57:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:59.778 00:57:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:59.778 00:57:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:23:59.778 00:57:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:59.778 00:57:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:23:59.778 00:57:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:23:59.778 00:57:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:23:59.778 00:57:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:23:59.778 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.778 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.778 nvme0n1 00:23:59.778 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.778 00:57:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:23:59.778 00:57:52 -- host/auth.sh@73 -- # jq -r '.[].name' 
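
Each digest/dhgroup/key iteration in this trace ends with the same check: rpc_cmd bdev_nvme_get_controllers lists the attached controllers, jq -r '.[].name' pulls out the name, the [[ nvme0 == \n\v\m\e\0 ]] test confirms the authenticated connect really produced the expected controller, and rpc_cmd bdev_nvme_detach_controller tears it down before the next combination. A minimal sketch of that verification, assuming rpc_cmd resolves to SPDK's scripts/rpc.py wrapper as it does in the autotest harness:

    # Confirm the DH-HMAC-CHAP connect created controller "nvme0", then detach.
    # rpc_cmd wrapping scripts/rpc.py is an assumption about the harness.
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]] || { echo "authenticated connect failed" >&2; exit 1; }
    rpc_cmd bdev_nvme_detach_controller nvme0
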
00:23:59.778 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.778 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:23:59.778 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.778 00:57:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:59.778 00:57:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:59.778 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.778 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:24:00.058 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.058 00:57:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:00.058 00:57:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:00.058 00:57:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:00.058 00:57:52 -- host/auth.sh@44 -- # digest=sha384 00:24:00.058 00:57:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:00.058 00:57:52 -- host/auth.sh@44 -- # keyid=1 00:24:00.058 00:57:52 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:00.058 00:57:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:00.058 00:57:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:00.058 00:57:52 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:00.058 00:57:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:24:00.058 00:57:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:00.058 00:57:52 -- host/auth.sh@68 -- # digest=sha384 00:24:00.058 00:57:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:00.058 00:57:52 -- host/auth.sh@68 -- # keyid=1 00:24:00.058 00:57:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:00.058 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.058 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:24:00.058 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.058 00:57:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:00.058 00:57:52 -- nvmf/common.sh@717 -- # local ip 00:24:00.058 00:57:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:00.058 00:57:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:00.058 00:57:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.058 00:57:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.058 00:57:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:00.058 00:57:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.058 00:57:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:00.058 00:57:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:00.058 00:57:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:00.058 00:57:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:00.058 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.058 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:24:00.058 nvme0n1 00:24:00.058 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.058 00:57:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.058 00:57:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:00.058 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.058 00:57:52 -- 
common/autotest_common.sh@10 -- # set +x 00:24:00.058 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.058 00:57:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.058 00:57:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.058 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.058 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:24:00.058 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.058 00:57:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:00.058 00:57:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:00.058 00:57:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:00.058 00:57:52 -- host/auth.sh@44 -- # digest=sha384 00:24:00.058 00:57:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:00.058 00:57:52 -- host/auth.sh@44 -- # keyid=2 00:24:00.058 00:57:52 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:00.058 00:57:52 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:00.058 00:57:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:00.317 00:57:52 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:00.317 00:57:52 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:24:00.317 00:57:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:00.317 00:57:52 -- host/auth.sh@68 -- # digest=sha384 00:24:00.317 00:57:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:00.317 00:57:52 -- host/auth.sh@68 -- # keyid=2 00:24:00.317 00:57:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:00.317 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.317 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:24:00.317 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.317 00:57:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:00.317 00:57:52 -- nvmf/common.sh@717 -- # local ip 00:24:00.317 00:57:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:00.317 00:57:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:00.317 00:57:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.317 00:57:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.317 00:57:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:00.317 00:57:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.317 00:57:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:00.317 00:57:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:00.317 00:57:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:00.317 00:57:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:00.317 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.317 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:24:00.317 nvme0n1 00:24:00.317 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.317 00:57:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.317 00:57:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:00.317 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.317 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:24:00.317 00:57:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.317 00:57:52 -- host/auth.sh@73 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:24:00.317 00:57:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.317 00:57:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.317 00:57:52 -- common/autotest_common.sh@10 -- # set +x 00:24:00.576 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.576 00:57:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:00.576 00:57:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:00.576 00:57:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:00.576 00:57:53 -- host/auth.sh@44 -- # digest=sha384 00:24:00.576 00:57:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:00.576 00:57:53 -- host/auth.sh@44 -- # keyid=3 00:24:00.576 00:57:53 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:00.576 00:57:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:00.576 00:57:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:00.576 00:57:53 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:00.576 00:57:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:24:00.576 00:57:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:00.576 00:57:53 -- host/auth.sh@68 -- # digest=sha384 00:24:00.576 00:57:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:00.576 00:57:53 -- host/auth.sh@68 -- # keyid=3 00:24:00.576 00:57:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:00.576 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.576 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:00.576 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.576 00:57:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:00.576 00:57:53 -- nvmf/common.sh@717 -- # local ip 00:24:00.576 00:57:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:00.576 00:57:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:00.576 00:57:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.576 00:57:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.576 00:57:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:00.576 00:57:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.576 00:57:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:00.576 00:57:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:00.576 00:57:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:00.576 00:57:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:00.576 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.576 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:00.576 nvme0n1 00:24:00.576 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.576 00:57:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.576 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.576 00:57:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:00.576 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:00.576 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.576 00:57:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.576 00:57:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.576 
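
On the target side, nvmet_auth_set_key (host/auth.sh@42-@49 in the trace) installs the parameters for the host entry: it echoes the crypto name 'hmac(sha384)', the DH group, and the DHHC-1 secret, where the DHHC-1:xx:...: strings are base64 secrets in the NVMe-spec DH-HMAC-CHAP key interchange format, generated earlier in the run. Because bash xtrace does not print redirections, the destinations are invisible in this log; the sketch below is a hedged reconstruction that assumes the kernel nvmet configfs attributes dhchap_hash, dhchap_dhgroup, and dhchap_key under the host's directory:

    # Hedged reconstruction of nvmet_auth_set_key: the configfs paths are
    # assumptions (xtrace hides redirection targets); the echoed values are
    # taken verbatim from the trace.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[$keyid]}   # DHHC-1 secret prepared earlier in the run
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
        echo "hmac(${digest})" > "${host}/dhchap_hash"
        echo "${dhgroup}"      > "${host}/dhchap_dhgroup"
        echo "${key}"          > "${host}/dhchap_key"
    }
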
00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.576 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:00.836 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.837 00:57:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:00.837 00:57:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:24:00.837 00:57:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:00.837 00:57:53 -- host/auth.sh@44 -- # digest=sha384 00:24:00.837 00:57:53 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:00.837 00:57:53 -- host/auth.sh@44 -- # keyid=4 00:24:00.837 00:57:53 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:00.837 00:57:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:00.837 00:57:53 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:00.837 00:57:53 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:00.837 00:57:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:24:00.837 00:57:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:00.837 00:57:53 -- host/auth.sh@68 -- # digest=sha384 00:24:00.837 00:57:53 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:00.837 00:57:53 -- host/auth.sh@68 -- # keyid=4 00:24:00.837 00:57:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:00.837 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.837 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:00.837 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.837 00:57:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:00.837 00:57:53 -- nvmf/common.sh@717 -- # local ip 00:24:00.837 00:57:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:00.837 00:57:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:00.837 00:57:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:00.837 00:57:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:00.837 00:57:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:00.837 00:57:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:00.837 00:57:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:00.837 00:57:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:00.837 00:57:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:00.837 00:57:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:00.837 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.837 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:00.837 nvme0n1 00:24:00.837 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.837 00:57:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:00.837 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.837 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:00.837 00:57:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:00.837 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:00.837 00:57:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:00.837 00:57:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:00.837 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:00.837 
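
The initiator half, connect_authenticate (host/auth.sh@66-@70), needs only two RPCs per combination: bdev_nvme_set_options pins the negotiable DH-HMAC-CHAP parameters to the single digest and DH group under test, and bdev_nvme_attach_controller connects with the matching key slot. Both commands below are copied from the trace; only the sequencing outside the RPCs is inferred:

    # Initiator side of one iteration: restrict negotiation to the digest and
    # DH group under test, then attach with the key slot being exercised
    # (key0..key4 across this stretch of the log).
    rpc_cmd bdev_nvme_set_options \
        --dhchap-digests sha384 \
        --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key0
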
00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:01.096 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.096 00:57:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:01.096 00:57:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:01.096 00:57:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:24:01.096 00:57:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:01.096 00:57:53 -- host/auth.sh@44 -- # digest=sha384 00:24:01.096 00:57:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:01.096 00:57:53 -- host/auth.sh@44 -- # keyid=0 00:24:01.096 00:57:53 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:01.096 00:57:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:01.096 00:57:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:01.096 00:57:53 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:01.096 00:57:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:24:01.096 00:57:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:01.096 00:57:53 -- host/auth.sh@68 -- # digest=sha384 00:24:01.096 00:57:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:01.096 00:57:53 -- host/auth.sh@68 -- # keyid=0 00:24:01.096 00:57:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:01.096 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.096 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:01.096 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.096 00:57:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:01.096 00:57:53 -- nvmf/common.sh@717 -- # local ip 00:24:01.096 00:57:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.096 00:57:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.096 00:57:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.096 00:57:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.096 00:57:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.096 00:57:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.096 00:57:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.096 00:57:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.096 00:57:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.096 00:57:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:01.096 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.096 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:01.355 nvme0n1 00:24:01.355 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.355 00:57:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.355 00:57:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:01.355 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.355 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:01.355 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.355 00:57:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.355 00:57:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.355 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.355 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:01.355 00:57:53 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.355 00:57:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:01.355 00:57:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:24:01.355 00:57:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:01.355 00:57:53 -- host/auth.sh@44 -- # digest=sha384 00:24:01.355 00:57:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:01.355 00:57:53 -- host/auth.sh@44 -- # keyid=1 00:24:01.355 00:57:53 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:01.355 00:57:53 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:01.355 00:57:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:01.355 00:57:53 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:01.355 00:57:53 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:24:01.355 00:57:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:01.355 00:57:53 -- host/auth.sh@68 -- # digest=sha384 00:24:01.355 00:57:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:01.355 00:57:53 -- host/auth.sh@68 -- # keyid=1 00:24:01.355 00:57:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:01.355 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.355 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:01.355 00:57:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.355 00:57:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:01.355 00:57:53 -- nvmf/common.sh@717 -- # local ip 00:24:01.355 00:57:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.355 00:57:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.356 00:57:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.356 00:57:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.356 00:57:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.356 00:57:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.356 00:57:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.356 00:57:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.356 00:57:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.356 00:57:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:01.356 00:57:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.356 00:57:53 -- common/autotest_common.sh@10 -- # set +x 00:24:01.613 nvme0n1 00:24:01.613 00:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.872 00:57:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:01.872 00:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.872 00:57:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:01.872 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:01.872 00:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.872 00:57:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:01.872 00:57:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:01.872 00:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.872 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:01.872 00:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.872 00:57:54 -- host/auth.sh@109 -- # for keyid in 
"${!keys[@]}" 00:24:01.872 00:57:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:24:01.872 00:57:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:01.872 00:57:54 -- host/auth.sh@44 -- # digest=sha384 00:24:01.872 00:57:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:01.872 00:57:54 -- host/auth.sh@44 -- # keyid=2 00:24:01.872 00:57:54 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:01.872 00:57:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:01.872 00:57:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:01.872 00:57:54 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:01.872 00:57:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:24:01.872 00:57:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:01.872 00:57:54 -- host/auth.sh@68 -- # digest=sha384 00:24:01.872 00:57:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:01.872 00:57:54 -- host/auth.sh@68 -- # keyid=2 00:24:01.872 00:57:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:01.872 00:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.872 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:01.872 00:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:01.872 00:57:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:01.872 00:57:54 -- nvmf/common.sh@717 -- # local ip 00:24:01.872 00:57:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:01.872 00:57:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:01.872 00:57:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:01.872 00:57:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:01.872 00:57:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:01.872 00:57:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:01.872 00:57:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:01.872 00:57:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:01.872 00:57:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:01.872 00:57:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:01.872 00:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:01.872 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:02.130 nvme0n1 00:24:02.130 00:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.130 00:57:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.130 00:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.130 00:57:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:02.130 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:02.130 00:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.130 00:57:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.130 00:57:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.130 00:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.130 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:02.130 00:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.130 00:57:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:02.130 00:57:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:24:02.130 00:57:54 -- host/auth.sh@42 -- # local digest dhgroup 
keyid key 00:24:02.130 00:57:54 -- host/auth.sh@44 -- # digest=sha384 00:24:02.130 00:57:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:02.130 00:57:54 -- host/auth.sh@44 -- # keyid=3 00:24:02.130 00:57:54 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:02.130 00:57:54 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:02.130 00:57:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:02.130 00:57:54 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:02.130 00:57:54 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:24:02.130 00:57:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:02.130 00:57:54 -- host/auth.sh@68 -- # digest=sha384 00:24:02.130 00:57:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:02.130 00:57:54 -- host/auth.sh@68 -- # keyid=3 00:24:02.130 00:57:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:02.130 00:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.130 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:02.130 00:57:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.130 00:57:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:02.130 00:57:54 -- nvmf/common.sh@717 -- # local ip 00:24:02.130 00:57:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:02.130 00:57:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:02.130 00:57:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.130 00:57:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.130 00:57:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:02.130 00:57:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.130 00:57:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:02.130 00:57:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:02.130 00:57:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:02.130 00:57:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:02.130 00:57:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.130 00:57:54 -- common/autotest_common.sh@10 -- # set +x 00:24:02.388 nvme0n1 00:24:02.388 00:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.648 00:57:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.648 00:57:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:02.648 00:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.648 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:24:02.648 00:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.648 00:57:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.648 00:57:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.648 00:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.648 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:24:02.648 00:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.648 00:57:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:02.648 00:57:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:24:02.648 00:57:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:02.648 00:57:55 -- host/auth.sh@44 -- # digest=sha384 00:24:02.648 00:57:55 -- host/auth.sh@44 -- # 
dhgroup=ffdhe6144 00:24:02.648 00:57:55 -- host/auth.sh@44 -- # keyid=4 00:24:02.648 00:57:55 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:02.648 00:57:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:02.648 00:57:55 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:02.648 00:57:55 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:02.648 00:57:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:24:02.648 00:57:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:02.648 00:57:55 -- host/auth.sh@68 -- # digest=sha384 00:24:02.648 00:57:55 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:02.648 00:57:55 -- host/auth.sh@68 -- # keyid=4 00:24:02.648 00:57:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:24:02.648 00:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.648 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:24:02.648 00:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.648 00:57:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:02.648 00:57:55 -- nvmf/common.sh@717 -- # local ip 00:24:02.648 00:57:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:02.648 00:57:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:02.648 00:57:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.648 00:57:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.648 00:57:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:02.648 00:57:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.648 00:57:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:02.648 00:57:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:02.648 00:57:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:02.648 00:57:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:02.648 00:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.648 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:24:02.907 nvme0n1 00:24:02.907 00:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.907 00:57:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:02.907 00:57:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:02.907 00:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.907 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:24:02.907 00:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.907 00:57:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:02.907 00:57:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:02.907 00:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.907 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:24:02.907 00:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.907 00:57:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:02.907 00:57:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:02.907 00:57:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:24:02.907 00:57:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:02.907 00:57:55 -- host/auth.sh@44 -- # digest=sha384 00:24:02.907 00:57:55 -- host/auth.sh@44 -- # 
dhgroup=ffdhe8192 00:24:02.907 00:57:55 -- host/auth.sh@44 -- # keyid=0 00:24:02.907 00:57:55 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:02.907 00:57:55 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:02.907 00:57:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:02.907 00:57:55 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:02.907 00:57:55 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:24:02.907 00:57:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:02.907 00:57:55 -- host/auth.sh@68 -- # digest=sha384 00:24:02.907 00:57:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:02.907 00:57:55 -- host/auth.sh@68 -- # keyid=0 00:24:02.907 00:57:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:02.907 00:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.907 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:24:02.907 00:57:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:02.907 00:57:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:02.907 00:57:55 -- nvmf/common.sh@717 -- # local ip 00:24:02.907 00:57:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:02.907 00:57:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:02.907 00:57:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:02.907 00:57:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:02.907 00:57:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:02.907 00:57:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:02.907 00:57:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:02.907 00:57:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:02.907 00:57:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:02.907 00:57:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:02.907 00:57:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:02.907 00:57:55 -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 nvme0n1 00:24:03.478 00:57:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.478 00:57:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:03.478 00:57:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.478 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:24:03.478 00:57:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:03.478 00:57:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.478 00:57:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:03.478 00:57:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:03.478 00:57:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.478 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:24:03.736 00:57:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.736 00:57:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:03.736 00:57:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:24:03.736 00:57:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:03.736 00:57:56 -- host/auth.sh@44 -- # digest=sha384 00:24:03.736 00:57:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:03.736 00:57:56 -- host/auth.sh@44 -- # keyid=1 00:24:03.736 00:57:56 -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:03.736 00:57:56 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:03.736 00:57:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:03.736 00:57:56 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:03.736 00:57:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:24:03.736 00:57:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:03.737 00:57:56 -- host/auth.sh@68 -- # digest=sha384 00:24:03.737 00:57:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:03.737 00:57:56 -- host/auth.sh@68 -- # keyid=1 00:24:03.737 00:57:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:03.737 00:57:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.737 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:24:03.737 00:57:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:03.737 00:57:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:03.737 00:57:56 -- nvmf/common.sh@717 -- # local ip 00:24:03.737 00:57:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:03.737 00:57:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:03.737 00:57:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:03.737 00:57:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:03.737 00:57:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:03.737 00:57:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:03.737 00:57:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:03.737 00:57:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:03.737 00:57:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:03.737 00:57:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:03.737 00:57:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:03.737 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:24:04.302 nvme0n1 00:24:04.302 00:57:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.302 00:57:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.302 00:57:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.302 00:57:56 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:04.302 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:24:04.302 00:57:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.302 00:57:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.302 00:57:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.302 00:57:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.302 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:24:04.302 00:57:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.302 00:57:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:04.302 00:57:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:24:04.302 00:57:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:04.302 00:57:56 -- host/auth.sh@44 -- # digest=sha384 00:24:04.302 00:57:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:04.302 00:57:56 -- host/auth.sh@44 -- # keyid=2 00:24:04.302 00:57:56 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:04.302 00:57:56 -- host/auth.sh@47 -- # echo 
'hmac(sha384)' 00:24:04.302 00:57:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:04.302 00:57:56 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:04.302 00:57:56 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:24:04.302 00:57:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:04.302 00:57:56 -- host/auth.sh@68 -- # digest=sha384 00:24:04.302 00:57:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:04.302 00:57:56 -- host/auth.sh@68 -- # keyid=2 00:24:04.302 00:57:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:04.302 00:57:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.302 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:24:04.302 00:57:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.302 00:57:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:04.302 00:57:56 -- nvmf/common.sh@717 -- # local ip 00:24:04.302 00:57:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:04.302 00:57:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:04.302 00:57:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.302 00:57:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.302 00:57:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:04.302 00:57:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.302 00:57:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:04.302 00:57:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:04.302 00:57:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:04.302 00:57:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:04.302 00:57:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.302 00:57:56 -- common/autotest_common.sh@10 -- # set +x 00:24:04.872 nvme0n1 00:24:04.872 00:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.872 00:57:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:04.872 00:57:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:04.872 00:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.872 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:04.872 00:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.872 00:57:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:04.872 00:57:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:04.872 00:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.872 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:04.872 00:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.872 00:57:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:04.872 00:57:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:04.872 00:57:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:04.872 00:57:57 -- host/auth.sh@44 -- # digest=sha384 00:24:04.872 00:57:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:04.872 00:57:57 -- host/auth.sh@44 -- # keyid=3 00:24:04.872 00:57:57 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:04.872 00:57:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:04.872 00:57:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:04.872 00:57:57 -- host/auth.sh@49 -- # echo 
DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:04.872 00:57:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:24:04.872 00:57:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:04.872 00:57:57 -- host/auth.sh@68 -- # digest=sha384 00:24:04.872 00:57:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:04.872 00:57:57 -- host/auth.sh@68 -- # keyid=3 00:24:04.872 00:57:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:04.872 00:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.872 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:04.872 00:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:04.872 00:57:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:04.872 00:57:57 -- nvmf/common.sh@717 -- # local ip 00:24:04.872 00:57:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:04.872 00:57:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:04.872 00:57:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:04.872 00:57:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:04.872 00:57:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:04.872 00:57:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:04.872 00:57:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:04.872 00:57:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:04.872 00:57:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:04.872 00:57:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:04.872 00:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:04.872 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:05.445 nvme0n1 00:24:05.445 00:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.445 00:57:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:05.445 00:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.445 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:05.445 00:57:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:05.445 00:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.445 00:57:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:05.445 00:57:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:05.445 00:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.445 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:05.445 00:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.445 00:57:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:05.445 00:57:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:05.445 00:57:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:05.445 00:57:57 -- host/auth.sh@44 -- # digest=sha384 00:24:05.445 00:57:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:05.445 00:57:57 -- host/auth.sh@44 -- # keyid=4 00:24:05.445 00:57:57 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:05.445 00:57:57 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:24:05.445 00:57:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:05.445 00:57:57 -- host/auth.sh@49 -- # echo 
DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:05.445 00:57:57 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:24:05.445 00:57:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:05.445 00:57:57 -- host/auth.sh@68 -- # digest=sha384 00:24:05.445 00:57:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:05.445 00:57:57 -- host/auth.sh@68 -- # keyid=4 00:24:05.445 00:57:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:05.445 00:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.445 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:05.445 00:57:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:05.445 00:57:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:05.445 00:57:57 -- nvmf/common.sh@717 -- # local ip 00:24:05.445 00:57:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:05.445 00:57:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:05.445 00:57:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:05.445 00:57:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:05.445 00:57:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:05.445 00:57:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:05.445 00:57:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:05.445 00:57:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:05.445 00:57:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:05.445 00:57:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:05.445 00:57:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:05.445 00:57:57 -- common/autotest_common.sh@10 -- # set +x 00:24:06.012 nvme0n1 00:24:06.012 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.012 00:57:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.012 00:57:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:06.012 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.012 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.012 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.012 00:57:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.012 00:57:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.012 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.012 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.012 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.012 00:57:58 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:24:06.012 00:57:58 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:06.012 00:57:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:06.012 00:57:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:06.012 00:57:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:06.012 00:57:58 -- host/auth.sh@44 -- # digest=sha512 00:24:06.012 00:57:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.012 00:57:58 -- host/auth.sh@44 -- # keyid=0 00:24:06.012 00:57:58 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:06.012 00:57:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:06.012 00:57:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:06.013 
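
The get_main_ns_ip helper (nvmf/common.sh@717-@731) that runs before every attach picks the dial address by transport: an associative array maps rdma to NVMF_FIRST_TARGET_IP and tcp to NVMF_INITIATOR_IP, and since this run uses TCP it dereferences NVMF_INITIATOR_IP and echoes 10.0.0.1 each time. Roughly, per the trace (the transport variable name and the ${!ip} indirect expansion are assumptions, since xtrace only shows the already-expanded values):

    # Approximate shape of get_main_ns_ip as it executes in this TCP run.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT ]] && return 1          # "tcp" in this log
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}          # NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1                   # expands to 10.0.0.1
        echo "${!ip}"
    }
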
00:57:58 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:06.013 00:57:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:24:06.013 00:57:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.013 00:57:58 -- host/auth.sh@68 -- # digest=sha512 00:24:06.013 00:57:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:06.013 00:57:58 -- host/auth.sh@68 -- # keyid=0 00:24:06.013 00:57:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.013 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.013 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.013 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.013 00:57:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.013 00:57:58 -- nvmf/common.sh@717 -- # local ip 00:24:06.013 00:57:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.013 00:57:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.013 00:57:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.013 00:57:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.013 00:57:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.013 00:57:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.013 00:57:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.013 00:57:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.013 00:57:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.013 00:57:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:06.013 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.013 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 nvme0n1 00:24:06.272 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.272 00:57:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:06.272 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.272 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.272 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.272 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:06.272 00:57:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:06.272 00:57:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:06.272 00:57:58 -- host/auth.sh@44 -- # digest=sha512 00:24:06.272 00:57:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.272 00:57:58 -- host/auth.sh@44 -- # keyid=1 00:24:06.272 00:57:58 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:06.272 00:57:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:06.272 00:57:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:06.272 00:57:58 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:06.272 00:57:58 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:24:06.272 00:57:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.272 00:57:58 -- host/auth.sh@68 -- # digest=sha512 00:24:06.272 00:57:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:06.272 00:57:58 -- host/auth.sh@68 -- # keyid=1 00:24:06.272 00:57:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.272 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.272 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.272 00:57:58 -- nvmf/common.sh@717 -- # local ip 00:24:06.272 00:57:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.272 00:57:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.272 00:57:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.272 00:57:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.272 00:57:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.272 00:57:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.272 00:57:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.272 00:57:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.272 00:57:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.272 00:57:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:06.272 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.272 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 nvme0n1 00:24:06.272 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.272 00:57:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:06.272 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.272 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.272 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.272 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.272 00:57:58 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:06.272 00:57:58 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:24:06.272 00:57:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:06.272 00:57:58 -- host/auth.sh@44 -- # digest=sha512 00:24:06.272 00:57:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.272 00:57:58 -- host/auth.sh@44 -- # keyid=2 00:24:06.272 00:57:58 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:06.272 00:57:58 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:06.272 00:57:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:06.272 00:57:58 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:06.272 00:57:58 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:24:06.272 00:57:58 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.272 00:57:58 -- 
host/auth.sh@68 -- # digest=sha512 00:24:06.272 00:57:58 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:06.272 00:57:58 -- host/auth.sh@68 -- # keyid=2 00:24:06.272 00:57:58 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.272 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.272 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.272 00:57:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.532 00:57:58 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.532 00:57:58 -- nvmf/common.sh@717 -- # local ip 00:24:06.532 00:57:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.532 00:57:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.532 00:57:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.532 00:57:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.532 00:57:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.532 00:57:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.532 00:57:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.532 00:57:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.532 00:57:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.532 00:57:58 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:06.532 00:57:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.532 00:57:58 -- common/autotest_common.sh@10 -- # set +x 00:24:06.532 nvme0n1 00:24:06.532 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.532 00:57:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.532 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.532 00:57:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:06.532 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.532 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.532 00:57:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.532 00:57:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.532 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.532 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.532 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.532 00:57:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:06.532 00:57:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:06.532 00:57:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:06.532 00:57:59 -- host/auth.sh@44 -- # digest=sha512 00:24:06.532 00:57:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.532 00:57:59 -- host/auth.sh@44 -- # keyid=3 00:24:06.532 00:57:59 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:06.532 00:57:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:06.532 00:57:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:06.532 00:57:59 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:06.532 00:57:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:24:06.532 00:57:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.532 00:57:59 -- host/auth.sh@68 -- # digest=sha512 00:24:06.532 00:57:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:06.532 00:57:59 
-- host/auth.sh@68 -- # keyid=3 00:24:06.532 00:57:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.532 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.532 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.532 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.532 00:57:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.532 00:57:59 -- nvmf/common.sh@717 -- # local ip 00:24:06.532 00:57:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.532 00:57:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.532 00:57:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.532 00:57:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.532 00:57:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.532 00:57:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.532 00:57:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.532 00:57:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.532 00:57:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.532 00:57:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:06.532 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.532 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.794 nvme0n1 00:24:06.794 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.794 00:57:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.794 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.794 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.794 00:57:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:06.794 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.794 00:57:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.794 00:57:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.794 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.794 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.794 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.794 00:57:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:06.794 00:57:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:06.794 00:57:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:06.794 00:57:59 -- host/auth.sh@44 -- # digest=sha512 00:24:06.794 00:57:59 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:06.794 00:57:59 -- host/auth.sh@44 -- # keyid=4 00:24:06.794 00:57:59 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:06.794 00:57:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:06.794 00:57:59 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:06.794 00:57:59 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:06.794 00:57:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:24:06.794 00:57:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:06.794 00:57:59 -- host/auth.sh@68 -- # digest=sha512 00:24:06.794 00:57:59 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:24:06.794 00:57:59 -- host/auth.sh@68 -- # keyid=4 00:24:06.794 00:57:59 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:06.794 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.794 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.794 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.794 00:57:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:06.794 00:57:59 -- nvmf/common.sh@717 -- # local ip 00:24:06.794 00:57:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:06.794 00:57:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:06.794 00:57:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:06.794 00:57:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:06.794 00:57:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:06.794 00:57:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:06.794 00:57:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:06.794 00:57:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:06.794 00:57:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:06.794 00:57:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:06.794 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.794 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.794 nvme0n1 00:24:06.794 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.794 00:57:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:06.794 00:57:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:06.794 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.794 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:06.794 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:06.794 00:57:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:06.794 00:57:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:06.794 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:06.794 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.056 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.056 00:57:59 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.056 00:57:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.056 00:57:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:07.056 00:57:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.056 00:57:59 -- host/auth.sh@44 -- # digest=sha512 00:24:07.056 00:57:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.056 00:57:59 -- host/auth.sh@44 -- # keyid=0 00:24:07.056 00:57:59 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:07.056 00:57:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:07.056 00:57:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:07.056 00:57:59 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:07.056 00:57:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:24:07.056 00:57:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.056 00:57:59 -- host/auth.sh@68 -- # digest=sha512 00:24:07.056 00:57:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:07.056 00:57:59 -- host/auth.sh@68 -- # keyid=0 00:24:07.056 00:57:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
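
Each pass above repeats the same five-step round for one (digest, dhgroup, keyid) combination: install the DH-HMAC-CHAP key on the target, pin the initiator to a single digest/dhgroup pair with bdev_nvme_set_options, attach over TCP with the matching --dhchap-key, check that controller nvme0 actually appeared, then detach. Below is a minimal standalone sketch of one such round using only RPCs that appear verbatim in the trace; calling rpc.py directly (instead of the suite's rpc_cmd wrapper), and assuming the target side is already listening on 10.0.0.1:4420 with this key installed, are my additions, not from the log.

#!/usr/bin/env bash
# One authentication round, mirroring connect_authenticate (host/auth.sh@66-74).
# Assumptions (not from the log): rpc.py is on PATH and talks to a running SPDK
# initiator; the kernel nvmet target already exposes nqn.2024-02.io.spdk:cnode0
# at 10.0.0.1:4420 with the key for this keyid installed.
set -e

digest=sha512
dhgroup=ffdhe3072
keyid=0

# Restrict the initiator to the digest/dhgroup pair under test.
rpc.py bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

# Connect with the DH-HMAC-CHAP key for this key slot.
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"

# Authentication only succeeded if the controller really came up.
[[ "$(rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == "nvme0" ]]

# Tear down before the next digest/dhgroup/keyid combination.
rpc.py bdev_nvme_detach_controller nvme0
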
00:24:07.056 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.056 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.056 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.056 00:57:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.056 00:57:59 -- nvmf/common.sh@717 -- # local ip 00:24:07.056 00:57:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.056 00:57:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.056 00:57:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.056 00:57:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.056 00:57:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.056 00:57:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.056 00:57:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.056 00:57:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.056 00:57:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.056 00:57:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:07.056 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.056 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.056 nvme0n1 00:24:07.056 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.056 00:57:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.056 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.056 00:57:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.056 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.056 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.056 00:57:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.056 00:57:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.056 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.056 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.056 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.056 00:57:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.056 00:57:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:07.056 00:57:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.056 00:57:59 -- host/auth.sh@44 -- # digest=sha512 00:24:07.056 00:57:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.056 00:57:59 -- host/auth.sh@44 -- # keyid=1 00:24:07.056 00:57:59 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:07.056 00:57:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:07.056 00:57:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:07.056 00:57:59 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:07.056 00:57:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:24:07.056 00:57:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.056 00:57:59 -- host/auth.sh@68 -- # digest=sha512 00:24:07.056 00:57:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:07.056 00:57:59 -- host/auth.sh@68 -- # keyid=1 00:24:07.056 00:57:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:07.056 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.056 00:57:59 -- 
common/autotest_common.sh@10 -- # set +x 00:24:07.056 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.056 00:57:59 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.056 00:57:59 -- nvmf/common.sh@717 -- # local ip 00:24:07.056 00:57:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.056 00:57:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.056 00:57:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.056 00:57:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.056 00:57:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.056 00:57:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.056 00:57:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.056 00:57:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.056 00:57:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.056 00:57:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:07.056 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.056 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.317 nvme0n1 00:24:07.317 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.317 00:57:59 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.317 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.317 00:57:59 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.317 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.317 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.317 00:57:59 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.317 00:57:59 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.317 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.317 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.317 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.317 00:57:59 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.317 00:57:59 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:24:07.317 00:57:59 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.317 00:57:59 -- host/auth.sh@44 -- # digest=sha512 00:24:07.317 00:57:59 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.317 00:57:59 -- host/auth.sh@44 -- # keyid=2 00:24:07.317 00:57:59 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:07.317 00:57:59 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:07.317 00:57:59 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:07.317 00:57:59 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:07.317 00:57:59 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:24:07.317 00:57:59 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.317 00:57:59 -- host/auth.sh@68 -- # digest=sha512 00:24:07.317 00:57:59 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:07.317 00:57:59 -- host/auth.sh@68 -- # keyid=2 00:24:07.317 00:57:59 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:07.317 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.317 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.317 00:57:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.317 00:57:59 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:24:07.317 00:57:59 -- nvmf/common.sh@717 -- # local ip 00:24:07.317 00:57:59 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.317 00:57:59 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.317 00:57:59 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.317 00:57:59 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.317 00:57:59 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.317 00:57:59 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.317 00:57:59 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.317 00:57:59 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.317 00:57:59 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.317 00:57:59 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:07.317 00:57:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.317 00:57:59 -- common/autotest_common.sh@10 -- # set +x 00:24:07.578 nvme0n1 00:24:07.578 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.578 00:58:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.578 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.578 00:58:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.578 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.578 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.578 00:58:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.578 00:58:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.578 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.578 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.578 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.578 00:58:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.578 00:58:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:07.578 00:58:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.578 00:58:00 -- host/auth.sh@44 -- # digest=sha512 00:24:07.578 00:58:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.578 00:58:00 -- host/auth.sh@44 -- # keyid=3 00:24:07.578 00:58:00 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:07.578 00:58:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:07.578 00:58:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:07.578 00:58:00 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:07.578 00:58:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:24:07.578 00:58:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.578 00:58:00 -- host/auth.sh@68 -- # digest=sha512 00:24:07.578 00:58:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:07.578 00:58:00 -- host/auth.sh@68 -- # keyid=3 00:24:07.578 00:58:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:07.578 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.578 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.578 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.578 00:58:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.578 00:58:00 -- nvmf/common.sh@717 -- # local ip 00:24:07.578 00:58:00 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:24:07.578 00:58:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.578 00:58:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.578 00:58:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.578 00:58:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.578 00:58:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.578 00:58:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.578 00:58:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.578 00:58:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.578 00:58:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:07.578 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.578 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.578 nvme0n1 00:24:07.578 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.578 00:58:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.578 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.578 00:58:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.578 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.578 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.839 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.839 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.839 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.839 00:58:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:07.839 00:58:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.839 00:58:00 -- host/auth.sh@44 -- # digest=sha512 00:24:07.839 00:58:00 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:07.839 00:58:00 -- host/auth.sh@44 -- # keyid=4 00:24:07.839 00:58:00 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:07.839 00:58:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:07.839 00:58:00 -- host/auth.sh@48 -- # echo ffdhe3072 00:24:07.839 00:58:00 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:07.839 00:58:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:24:07.839 00:58:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.839 00:58:00 -- host/auth.sh@68 -- # digest=sha512 00:24:07.839 00:58:00 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:24:07.839 00:58:00 -- host/auth.sh@68 -- # keyid=4 00:24:07.839 00:58:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:07.839 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.839 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.839 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.839 00:58:00 -- nvmf/common.sh@717 -- # local ip 00:24:07.839 00:58:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.839 00:58:00 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:24:07.839 00:58:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.839 00:58:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.839 00:58:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.839 00:58:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.839 00:58:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.839 00:58:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.839 00:58:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.839 00:58:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:07.839 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.839 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.839 nvme0n1 00:24:07.839 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:07.839 00:58:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:07.839 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.839 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.839 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:07.839 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.839 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.839 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:07.839 00:58:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:07.839 00:58:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:07.839 00:58:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:07.839 00:58:00 -- host/auth.sh@44 -- # digest=sha512 00:24:07.839 00:58:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:07.839 00:58:00 -- host/auth.sh@44 -- # keyid=0 00:24:07.839 00:58:00 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:07.839 00:58:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:07.839 00:58:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:07.839 00:58:00 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:07.839 00:58:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:24:07.839 00:58:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:07.839 00:58:00 -- host/auth.sh@68 -- # digest=sha512 00:24:07.839 00:58:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:07.839 00:58:00 -- host/auth.sh@68 -- # keyid=0 00:24:07.839 00:58:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:07.839 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.839 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:07.839 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:07.839 00:58:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:07.839 00:58:00 -- nvmf/common.sh@717 -- # local ip 00:24:07.839 00:58:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:07.839 00:58:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:07.839 00:58:00 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:07.839 00:58:00 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:07.839 00:58:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:07.839 00:58:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:07.839 00:58:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:07.839 00:58:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:07.839 00:58:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:07.839 00:58:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:07.839 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:07.839 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.100 nvme0n1 00:24:08.100 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.100 00:58:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.100 00:58:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.100 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.100 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.100 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.100 00:58:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.100 00:58:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.100 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.100 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.100 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.100 00:58:00 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.100 00:58:00 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:24:08.100 00:58:00 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.100 00:58:00 -- host/auth.sh@44 -- # digest=sha512 00:24:08.100 00:58:00 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:08.100 00:58:00 -- host/auth.sh@44 -- # keyid=1 00:24:08.100 00:58:00 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:08.100 00:58:00 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:08.100 00:58:00 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:08.100 00:58:00 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:08.100 00:58:00 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:24:08.100 00:58:00 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.100 00:58:00 -- host/auth.sh@68 -- # digest=sha512 00:24:08.100 00:58:00 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:08.100 00:58:00 -- host/auth.sh@68 -- # keyid=1 00:24:08.100 00:58:00 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:08.100 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.100 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.100 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.100 00:58:00 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.100 00:58:00 -- nvmf/common.sh@717 -- # local ip 00:24:08.100 00:58:00 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.100 00:58:00 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.100 00:58:00 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.100 00:58:00 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.100 00:58:00 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.100 00:58:00 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.100 00:58:00 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.100 00:58:00 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.100 00:58:00 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.100 00:58:00 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:08.100 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.100 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.361 nvme0n1 00:24:08.361 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.361 00:58:00 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.361 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.361 00:58:00 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.361 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.361 00:58:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.361 00:58:00 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.361 00:58:00 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.361 00:58:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.361 00:58:00 -- common/autotest_common.sh@10 -- # set +x 00:24:08.361 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.361 00:58:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.361 00:58:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:24:08.361 00:58:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.361 00:58:01 -- host/auth.sh@44 -- # digest=sha512 00:24:08.361 00:58:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:08.361 00:58:01 -- host/auth.sh@44 -- # keyid=2 00:24:08.361 00:58:01 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:08.361 00:58:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:08.361 00:58:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:08.361 00:58:01 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:08.361 00:58:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:24:08.361 00:58:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.361 00:58:01 -- host/auth.sh@68 -- # digest=sha512 00:24:08.361 00:58:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:08.361 00:58:01 -- host/auth.sh@68 -- # keyid=2 00:24:08.361 00:58:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:08.361 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.361 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.361 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.361 00:58:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.361 00:58:01 -- nvmf/common.sh@717 -- # local ip 00:24:08.361 00:58:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.361 00:58:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.361 00:58:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.361 00:58:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.361 00:58:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.361 00:58:01 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:24:08.361 00:58:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.361 00:58:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.361 00:58:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.361 00:58:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:08.361 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.361 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.620 nvme0n1 00:24:08.620 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.620 00:58:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.620 00:58:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.620 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.620 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.620 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.620 00:58:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.620 00:58:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.620 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.620 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.620 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.620 00:58:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.620 00:58:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:24:08.620 00:58:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.620 00:58:01 -- host/auth.sh@44 -- # digest=sha512 00:24:08.620 00:58:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:08.620 00:58:01 -- host/auth.sh@44 -- # keyid=3 00:24:08.620 00:58:01 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:08.620 00:58:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:08.620 00:58:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:08.620 00:58:01 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:08.620 00:58:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:24:08.620 00:58:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.620 00:58:01 -- host/auth.sh@68 -- # digest=sha512 00:24:08.620 00:58:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:08.620 00:58:01 -- host/auth.sh@68 -- # keyid=3 00:24:08.620 00:58:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:08.620 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.620 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.620 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.620 00:58:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.620 00:58:01 -- nvmf/common.sh@717 -- # local ip 00:24:08.620 00:58:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.620 00:58:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.620 00:58:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.620 00:58:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.620 00:58:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.620 00:58:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.620 00:58:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.620 00:58:01 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.620 00:58:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.620 00:58:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:08.620 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.620 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.878 nvme0n1 00:24:08.878 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.878 00:58:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:08.878 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.878 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.878 00:58:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:08.878 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.878 00:58:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:08.878 00:58:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:08.878 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.878 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.878 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.878 00:58:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:08.878 00:58:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:24:08.878 00:58:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:08.878 00:58:01 -- host/auth.sh@44 -- # digest=sha512 00:24:08.878 00:58:01 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:08.878 00:58:01 -- host/auth.sh@44 -- # keyid=4 00:24:08.878 00:58:01 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:08.878 00:58:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:08.878 00:58:01 -- host/auth.sh@48 -- # echo ffdhe4096 00:24:08.878 00:58:01 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:08.878 00:58:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:24:08.878 00:58:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:08.878 00:58:01 -- host/auth.sh@68 -- # digest=sha512 00:24:08.878 00:58:01 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:24:08.878 00:58:01 -- host/auth.sh@68 -- # keyid=4 00:24:08.878 00:58:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:08.878 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.878 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:08.878 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:08.878 00:58:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:08.878 00:58:01 -- nvmf/common.sh@717 -- # local ip 00:24:08.878 00:58:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:08.878 00:58:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:08.878 00:58:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:08.878 00:58:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:08.878 00:58:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:08.878 00:58:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:08.878 00:58:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:08.878 00:58:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:08.878 00:58:01 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:08.878 00:58:01 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:08.878 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:08.878 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:09.136 nvme0n1 00:24:09.136 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.136 00:58:01 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.136 00:58:01 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.136 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.136 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:09.136 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.136 00:58:01 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.136 00:58:01 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.136 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.136 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:09.136 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.136 00:58:01 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:09.136 00:58:01 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.136 00:58:01 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:24:09.136 00:58:01 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.136 00:58:01 -- host/auth.sh@44 -- # digest=sha512 00:24:09.136 00:58:01 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:09.136 00:58:01 -- host/auth.sh@44 -- # keyid=0 00:24:09.136 00:58:01 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:09.136 00:58:01 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:09.136 00:58:01 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:09.136 00:58:01 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:09.136 00:58:01 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:24:09.136 00:58:01 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.136 00:58:01 -- host/auth.sh@68 -- # digest=sha512 00:24:09.136 00:58:01 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:09.136 00:58:01 -- host/auth.sh@68 -- # keyid=0 00:24:09.136 00:58:01 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:09.136 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.136 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:09.136 00:58:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.136 00:58:01 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.137 00:58:01 -- nvmf/common.sh@717 -- # local ip 00:24:09.137 00:58:01 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.137 00:58:01 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.137 00:58:01 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.137 00:58:01 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.137 00:58:01 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.137 00:58:01 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.137 00:58:01 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.137 00:58:01 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.137 00:58:01 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.137 00:58:01 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:09.137 00:58:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.137 00:58:01 -- common/autotest_common.sh@10 -- # set +x 00:24:09.704 nvme0n1 00:24:09.704 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.704 00:58:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.704 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.704 00:58:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.704 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.704 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.704 00:58:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.704 00:58:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.704 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.704 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.704 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.704 00:58:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.704 00:58:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:24:09.704 00:58:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.705 00:58:02 -- host/auth.sh@44 -- # digest=sha512 00:24:09.705 00:58:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:09.705 00:58:02 -- host/auth.sh@44 -- # keyid=1 00:24:09.705 00:58:02 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:09.705 00:58:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:09.705 00:58:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:09.705 00:58:02 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:09.705 00:58:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:24:09.705 00:58:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.705 00:58:02 -- host/auth.sh@68 -- # digest=sha512 00:24:09.705 00:58:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:09.705 00:58:02 -- host/auth.sh@68 -- # keyid=1 00:24:09.705 00:58:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:09.705 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.705 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.705 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.705 00:58:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.705 00:58:02 -- nvmf/common.sh@717 -- # local ip 00:24:09.705 00:58:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.705 00:58:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.705 00:58:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.705 00:58:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.705 00:58:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.705 00:58:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.705 00:58:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.705 00:58:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.705 00:58:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.705 00:58:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:09.705 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.705 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.963 nvme0n1 00:24:09.963 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.963 00:58:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:09.963 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.963 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.963 00:58:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:09.963 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.963 00:58:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:09.963 00:58:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:09.963 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.963 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.963 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.963 00:58:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:09.963 00:58:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:24:09.963 00:58:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:09.963 00:58:02 -- host/auth.sh@44 -- # digest=sha512 00:24:09.963 00:58:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:09.963 00:58:02 -- host/auth.sh@44 -- # keyid=2 00:24:09.963 00:58:02 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:09.963 00:58:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:09.963 00:58:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:09.963 00:58:02 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:09.963 00:58:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:24:09.963 00:58:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:09.963 00:58:02 -- host/auth.sh@68 -- # digest=sha512 00:24:09.963 00:58:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:09.963 00:58:02 -- host/auth.sh@68 -- # keyid=2 00:24:09.963 00:58:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:09.963 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.963 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:09.963 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:09.963 00:58:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:09.963 00:58:02 -- nvmf/common.sh@717 -- # local ip 00:24:09.963 00:58:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:09.963 00:58:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:09.963 00:58:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:09.963 00:58:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:09.963 00:58:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:09.963 00:58:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:09.963 00:58:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:09.963 00:58:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:09.963 00:58:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:09.963 00:58:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:09.963 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:09.963 00:58:02 -- 
common/autotest_common.sh@10 -- # set +x 00:24:10.528 nvme0n1 00:24:10.528 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.528 00:58:02 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.528 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.528 00:58:02 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:10.528 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:10.528 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.528 00:58:02 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.528 00:58:02 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.528 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.528 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:10.528 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.528 00:58:02 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:10.528 00:58:02 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:24:10.528 00:58:02 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:10.528 00:58:02 -- host/auth.sh@44 -- # digest=sha512 00:24:10.528 00:58:02 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:10.528 00:58:02 -- host/auth.sh@44 -- # keyid=3 00:24:10.528 00:58:02 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:10.528 00:58:02 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:10.528 00:58:02 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:10.528 00:58:02 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:10.528 00:58:02 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:24:10.528 00:58:02 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:10.528 00:58:02 -- host/auth.sh@68 -- # digest=sha512 00:24:10.528 00:58:02 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:10.528 00:58:02 -- host/auth.sh@68 -- # keyid=3 00:24:10.528 00:58:02 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:10.528 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.528 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:10.528 00:58:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.528 00:58:02 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:10.528 00:58:02 -- nvmf/common.sh@717 -- # local ip 00:24:10.528 00:58:02 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.528 00:58:02 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.528 00:58:02 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.528 00:58:02 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.528 00:58:02 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.528 00:58:02 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.528 00:58:02 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.528 00:58:02 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.528 00:58:02 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.528 00:58:02 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:10.528 00:58:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.528 00:58:02 -- common/autotest_common.sh@10 -- # set +x 00:24:10.785 nvme0n1 00:24:10.785 00:58:03 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:24:10.785 00:58:03 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:10.785 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.785 00:58:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:10.785 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:10.785 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.785 00:58:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:10.785 00:58:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:10.785 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.785 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:10.785 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.785 00:58:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:10.785 00:58:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:24:10.785 00:58:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:10.785 00:58:03 -- host/auth.sh@44 -- # digest=sha512 00:24:10.785 00:58:03 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:10.785 00:58:03 -- host/auth.sh@44 -- # keyid=4 00:24:10.785 00:58:03 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:10.785 00:58:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:10.785 00:58:03 -- host/auth.sh@48 -- # echo ffdhe6144 00:24:10.785 00:58:03 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:10.785 00:58:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:24:10.785 00:58:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:10.785 00:58:03 -- host/auth.sh@68 -- # digest=sha512 00:24:10.785 00:58:03 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:24:10.785 00:58:03 -- host/auth.sh@68 -- # keyid=4 00:24:10.785 00:58:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:24:10.785 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.785 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:10.785 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:10.785 00:58:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:10.785 00:58:03 -- nvmf/common.sh@717 -- # local ip 00:24:10.785 00:58:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:10.785 00:58:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:10.785 00:58:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:10.785 00:58:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:10.785 00:58:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:10.785 00:58:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:10.785 00:58:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:10.785 00:58:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:10.785 00:58:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:10.785 00:58:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:10.785 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:10.785 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.355 nvme0n1 00:24:11.355 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.355 00:58:03 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:11.355 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.355 00:58:03 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:11.355 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.355 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.355 00:58:03 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.355 00:58:03 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.355 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.355 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.355 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.355 00:58:03 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:24:11.355 00:58:03 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:11.355 00:58:03 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:24:11.355 00:58:03 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:11.355 00:58:03 -- host/auth.sh@44 -- # digest=sha512 00:24:11.355 00:58:03 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.355 00:58:03 -- host/auth.sh@44 -- # keyid=0 00:24:11.355 00:58:03 -- host/auth.sh@45 -- # key=DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:11.355 00:58:03 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:11.355 00:58:03 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:11.355 00:58:03 -- host/auth.sh@49 -- # echo DHHC-1:00:N2U3ZmRhZWE5ZGFiZTZhOTNhMzAxNTI2NmI4M2ZhMjUNeyX5: 00:24:11.355 00:58:03 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:24:11.355 00:58:03 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:11.355 00:58:03 -- host/auth.sh@68 -- # digest=sha512 00:24:11.355 00:58:03 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:11.355 00:58:03 -- host/auth.sh@68 -- # keyid=0 00:24:11.355 00:58:03 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:11.355 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.355 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.355 00:58:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.355 00:58:03 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:11.355 00:58:03 -- nvmf/common.sh@717 -- # local ip 00:24:11.355 00:58:03 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:11.355 00:58:03 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:11.355 00:58:03 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.355 00:58:03 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.355 00:58:03 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:11.355 00:58:03 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.355 00:58:03 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:11.355 00:58:03 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:11.355 00:58:03 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:11.355 00:58:03 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:24:11.355 00:58:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.355 00:58:03 -- common/autotest_common.sh@10 -- # set +x 00:24:11.924 nvme0n1 00:24:11.924 00:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.924 00:58:04 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:11.924 00:58:04 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:24:11.924 00:58:04 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:11.924 00:58:04 -- common/autotest_common.sh@10 -- # set +x 00:24:11.924 00:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.924 00:58:04 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:11.924 00:58:04 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:11.924 00:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.924 00:58:04 -- common/autotest_common.sh@10 -- # set +x 00:24:11.924 00:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.924 00:58:04 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:11.924 00:58:04 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:24:11.924 00:58:04 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:11.924 00:58:04 -- host/auth.sh@44 -- # digest=sha512 00:24:11.924 00:58:04 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:11.924 00:58:04 -- host/auth.sh@44 -- # keyid=1 00:24:11.924 00:58:04 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:11.924 00:58:04 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:11.924 00:58:04 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:11.924 00:58:04 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:11.924 00:58:04 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:24:11.924 00:58:04 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:11.924 00:58:04 -- host/auth.sh@68 -- # digest=sha512 00:24:11.924 00:58:04 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:11.924 00:58:04 -- host/auth.sh@68 -- # keyid=1 00:24:11.924 00:58:04 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:11.924 00:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.924 00:58:04 -- common/autotest_common.sh@10 -- # set +x 00:24:11.924 00:58:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:11.924 00:58:04 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:11.924 00:58:04 -- nvmf/common.sh@717 -- # local ip 00:24:11.924 00:58:04 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:11.924 00:58:04 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:11.924 00:58:04 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:11.924 00:58:04 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:11.924 00:58:04 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:11.924 00:58:04 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:11.924 00:58:04 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:11.924 00:58:04 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:11.924 00:58:04 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:11.924 00:58:04 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:24:11.924 00:58:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:11.924 00:58:04 -- common/autotest_common.sh@10 -- # set +x 00:24:12.490 nvme0n1 00:24:12.490 00:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.490 00:58:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:12.490 00:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.490 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:24:12.490 00:58:05 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:24:12.490 00:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.490 00:58:05 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:12.490 00:58:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:12.490 00:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.490 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:24:12.490 00:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.490 00:58:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:12.490 00:58:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:24:12.490 00:58:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:12.490 00:58:05 -- host/auth.sh@44 -- # digest=sha512 00:24:12.490 00:58:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:12.490 00:58:05 -- host/auth.sh@44 -- # keyid=2 00:24:12.490 00:58:05 -- host/auth.sh@45 -- # key=DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:12.490 00:58:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:12.490 00:58:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:12.490 00:58:05 -- host/auth.sh@49 -- # echo DHHC-1:01:Y2YyNjdmMWVkYTJlMGU0ZGU4ZGVkOTAxNTY4NTUyYzSUU0h5: 00:24:12.490 00:58:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:24:12.490 00:58:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:12.490 00:58:05 -- host/auth.sh@68 -- # digest=sha512 00:24:12.490 00:58:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:12.490 00:58:05 -- host/auth.sh@68 -- # keyid=2 00:24:12.490 00:58:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:12.490 00:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.490 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:24:12.490 00:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:12.490 00:58:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:12.490 00:58:05 -- nvmf/common.sh@717 -- # local ip 00:24:12.490 00:58:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:12.490 00:58:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:12.490 00:58:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:12.490 00:58:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:12.490 00:58:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:12.490 00:58:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:12.490 00:58:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:12.490 00:58:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:12.490 00:58:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:12.490 00:58:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:12.490 00:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:12.490 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:24:13.058 nvme0n1 00:24:13.058 00:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.058 00:58:05 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.058 00:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.058 00:58:05 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.058 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:24:13.058 00:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.058 00:58:05 -- host/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:24:13.058 00:58:05 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.058 00:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.058 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:24:13.058 00:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.058 00:58:05 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.058 00:58:05 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:24:13.058 00:58:05 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.058 00:58:05 -- host/auth.sh@44 -- # digest=sha512 00:24:13.058 00:58:05 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.058 00:58:05 -- host/auth.sh@44 -- # keyid=3 00:24:13.058 00:58:05 -- host/auth.sh@45 -- # key=DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:13.058 00:58:05 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:13.058 00:58:05 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:13.058 00:58:05 -- host/auth.sh@49 -- # echo DHHC-1:02:MWFiYzFhYjY3ZmZkZmViMmVjZjFjY2M1ZDkxMDI3ODQ3Y2NkMzk4ZjBhMTlmYTBmUrtBxA==: 00:24:13.058 00:58:05 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:24:13.058 00:58:05 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.058 00:58:05 -- host/auth.sh@68 -- # digest=sha512 00:24:13.058 00:58:05 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:13.058 00:58:05 -- host/auth.sh@68 -- # keyid=3 00:24:13.058 00:58:05 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:13.058 00:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.058 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:24:13.058 00:58:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.058 00:58:05 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:13.058 00:58:05 -- nvmf/common.sh@717 -- # local ip 00:24:13.058 00:58:05 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.058 00:58:05 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:13.058 00:58:05 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.058 00:58:05 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.058 00:58:05 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.058 00:58:05 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.058 00:58:05 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.058 00:58:05 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.058 00:58:05 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.058 00:58:05 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:24:13.058 00:58:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.058 00:58:05 -- common/autotest_common.sh@10 -- # set +x 00:24:13.629 nvme0n1 00:24:13.629 00:58:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.629 00:58:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:13.629 00:58:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:13.629 00:58:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.629 00:58:06 -- common/autotest_common.sh@10 -- # set +x 00:24:13.629 00:58:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.629 00:58:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:13.629 00:58:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:13.629 
00:58:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.629 00:58:06 -- common/autotest_common.sh@10 -- # set +x 00:24:13.629 00:58:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.629 00:58:06 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:24:13.629 00:58:06 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:24:13.629 00:58:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:13.629 00:58:06 -- host/auth.sh@44 -- # digest=sha512 00:24:13.629 00:58:06 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:13.629 00:58:06 -- host/auth.sh@44 -- # keyid=4 00:24:13.629 00:58:06 -- host/auth.sh@45 -- # key=DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:13.887 00:58:06 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:24:13.887 00:58:06 -- host/auth.sh@48 -- # echo ffdhe8192 00:24:13.887 00:58:06 -- host/auth.sh@49 -- # echo DHHC-1:03:YjIxNzFhNjUyZWNkYTI5MjkzZDRlOWJmNTFiM2Q0YzRkY2NlM2JmZjEzMmU2NDk4YWQ1NDRjNzhmMjMxOTU0MmNEfBk=: 00:24:13.887 00:58:06 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:24:13.887 00:58:06 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:24:13.887 00:58:06 -- host/auth.sh@68 -- # digest=sha512 00:24:13.887 00:58:06 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:24:13.887 00:58:06 -- host/auth.sh@68 -- # keyid=4 00:24:13.887 00:58:06 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:24:13.887 00:58:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.887 00:58:06 -- common/autotest_common.sh@10 -- # set +x 00:24:13.887 00:58:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:13.887 00:58:06 -- host/auth.sh@70 -- # get_main_ns_ip 00:24:13.887 00:58:06 -- nvmf/common.sh@717 -- # local ip 00:24:13.887 00:58:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:13.887 00:58:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:13.887 00:58:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:13.887 00:58:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:13.887 00:58:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:13.887 00:58:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:13.887 00:58:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:13.887 00:58:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:13.887 00:58:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:13.887 00:58:06 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:13.887 00:58:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:13.887 00:58:06 -- common/autotest_common.sh@10 -- # set +x 00:24:14.452 nvme0n1 00:24:14.452 00:58:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.452 00:58:06 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.452 00:58:06 -- host/auth.sh@73 -- # jq -r '.[].name' 00:24:14.452 00:58:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.452 00:58:06 -- common/autotest_common.sh@10 -- # set +x 00:24:14.452 00:58:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.452 00:58:06 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:14.452 00:58:06 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:14.452 00:58:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.452 
00:58:06 -- common/autotest_common.sh@10 -- # set +x 00:24:14.452 00:58:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.452 00:58:06 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:14.452 00:58:06 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:24:14.452 00:58:06 -- host/auth.sh@44 -- # digest=sha256 00:24:14.452 00:58:06 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:14.452 00:58:06 -- host/auth.sh@44 -- # keyid=1 00:24:14.452 00:58:06 -- host/auth.sh@45 -- # key=DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:14.452 00:58:06 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:24:14.452 00:58:06 -- host/auth.sh@48 -- # echo ffdhe2048 00:24:14.452 00:58:06 -- host/auth.sh@49 -- # echo DHHC-1:00:ZTJkNGIyNTA5OWIwYWYxNWZhMWQ1ODkxZTcyMjQ2NWFhYWMwY2IxMDYwYTZhNjcyOJOZvQ==: 00:24:14.452 00:58:06 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:14.452 00:58:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.452 00:58:06 -- common/autotest_common.sh@10 -- # set +x 00:24:14.452 00:58:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.452 00:58:06 -- host/auth.sh@119 -- # get_main_ns_ip 00:24:14.452 00:58:06 -- nvmf/common.sh@717 -- # local ip 00:24:14.452 00:58:06 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.452 00:58:06 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.452 00:58:06 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.452 00:58:06 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.452 00:58:06 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.452 00:58:06 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.452 00:58:06 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.452 00:58:06 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.452 00:58:06 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.452 00:58:06 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:14.452 00:58:06 -- common/autotest_common.sh@638 -- # local es=0 00:24:14.452 00:58:06 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:14.452 00:58:06 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:14.452 00:58:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:14.452 00:58:06 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:14.452 00:58:06 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:14.452 00:58:06 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:24:14.452 00:58:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.452 00:58:06 -- common/autotest_common.sh@10 -- # set +x 00:24:14.452 request: 00:24:14.452 { 00:24:14.452 "name": "nvme0", 00:24:14.452 "trtype": "tcp", 00:24:14.452 "traddr": "10.0.0.1", 00:24:14.452 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:14.452 "adrfam": "ipv4", 00:24:14.452 "trsvcid": "4420", 00:24:14.452 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:14.452 "method": "bdev_nvme_attach_controller", 00:24:14.452 "req_id": 1 00:24:14.452 } 00:24:14.452 Got JSON-RPC error 
response 00:24:14.452 response: 00:24:14.452 { 00:24:14.452 "code": -32602, 00:24:14.452 "message": "Invalid parameters" 00:24:14.452 } 00:24:14.452 00:58:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:24:14.452 00:58:07 -- common/autotest_common.sh@641 -- # es=1 00:24:14.452 00:58:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:14.453 00:58:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:14.453 00:58:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:14.453 00:58:07 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.453 00:58:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.453 00:58:07 -- common/autotest_common.sh@10 -- # set +x 00:24:14.453 00:58:07 -- host/auth.sh@121 -- # jq length 00:24:14.453 00:58:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.453 00:58:07 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:24:14.453 00:58:07 -- host/auth.sh@124 -- # get_main_ns_ip 00:24:14.453 00:58:07 -- nvmf/common.sh@717 -- # local ip 00:24:14.453 00:58:07 -- nvmf/common.sh@718 -- # ip_candidates=() 00:24:14.453 00:58:07 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:24:14.453 00:58:07 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:14.453 00:58:07 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:14.453 00:58:07 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:24:14.453 00:58:07 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:14.453 00:58:07 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:24:14.453 00:58:07 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:24:14.453 00:58:07 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:24:14.453 00:58:07 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.453 00:58:07 -- common/autotest_common.sh@638 -- # local es=0 00:24:14.453 00:58:07 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.453 00:58:07 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:24:14.453 00:58:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:14.453 00:58:07 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:24:14.453 00:58:07 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:24:14.453 00:58:07 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:24:14.453 00:58:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.453 00:58:07 -- common/autotest_common.sh@10 -- # set +x 00:24:14.453 request: 00:24:14.453 { 00:24:14.453 "name": "nvme0", 00:24:14.453 "trtype": "tcp", 00:24:14.453 "traddr": "10.0.0.1", 00:24:14.453 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:24:14.453 "adrfam": "ipv4", 00:24:14.453 "trsvcid": "4420", 00:24:14.453 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:24:14.453 "dhchap_key": "key2", 00:24:14.453 "method": "bdev_nvme_attach_controller", 00:24:14.453 "req_id": 1 00:24:14.453 } 00:24:14.453 Got JSON-RPC error response 00:24:14.453 response: 00:24:14.453 { 00:24:14.453 "code": -32602, 00:24:14.453 "message": "Invalid parameters" 00:24:14.453 } 00:24:14.453 00:58:07 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 
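
Both -32602 (Invalid parameters) responses above are the expected outcome: the kernel target was provisioned with DHCHAP keys, so an attach attempt that offers no key, or the wrong key slot (key2), must be rejected. A minimal sketch of the same negative check outside the harness, assuming SPDK's rpc.py is on PATH and the target from this run is still listening on 10.0.0.1:4420 (all flags as in the log above):

    # expect failure: no --dhchap-key offered while the target requires one
    if rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "attach unexpectedly succeeded" >&2
        exit 1
    fi
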
00:24:14.453 00:58:07 -- common/autotest_common.sh@641 -- # es=1 00:24:14.453 00:58:07 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:24:14.453 00:58:07 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:24:14.453 00:58:07 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:24:14.453 00:58:07 -- host/auth.sh@127 -- # jq length 00:24:14.453 00:58:07 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:24:14.453 00:58:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:14.453 00:58:07 -- common/autotest_common.sh@10 -- # set +x 00:24:14.453 00:58:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:14.453 00:58:07 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:24:14.453 00:58:07 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:24:14.453 00:58:07 -- host/auth.sh@130 -- # cleanup 00:24:14.453 00:58:07 -- host/auth.sh@24 -- # nvmftestfini 00:24:14.453 00:58:07 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:14.453 00:58:07 -- nvmf/common.sh@117 -- # sync 00:24:14.453 00:58:07 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:14.453 00:58:07 -- nvmf/common.sh@120 -- # set +e 00:24:14.453 00:58:07 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:14.453 00:58:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:14.453 rmmod nvme_tcp 00:24:14.453 rmmod nvme_fabrics 00:24:14.453 00:58:07 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:14.453 00:58:07 -- nvmf/common.sh@124 -- # set -e 00:24:14.453 00:58:07 -- nvmf/common.sh@125 -- # return 0 00:24:14.453 00:58:07 -- nvmf/common.sh@478 -- # '[' -n 2873413 ']' 00:24:14.453 00:58:07 -- nvmf/common.sh@479 -- # killprocess 2873413 00:24:14.453 00:58:07 -- common/autotest_common.sh@936 -- # '[' -z 2873413 ']' 00:24:14.453 00:58:07 -- common/autotest_common.sh@940 -- # kill -0 2873413 00:24:14.453 00:58:07 -- common/autotest_common.sh@941 -- # uname 00:24:14.453 00:58:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:14.712 00:58:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2873413 00:24:14.712 00:58:07 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:14.712 00:58:07 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:14.712 00:58:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2873413' 00:24:14.712 killing process with pid 2873413 00:24:14.712 00:58:07 -- common/autotest_common.sh@955 -- # kill 2873413 00:24:14.712 00:58:07 -- common/autotest_common.sh@960 -- # wait 2873413 00:24:15.056 00:58:07 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:15.056 00:58:07 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:15.056 00:58:07 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:15.056 00:58:07 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:15.056 00:58:07 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:15.056 00:58:07 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:15.056 00:58:07 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:15.056 00:58:07 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:16.992 00:58:09 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:16.992 00:58:09 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:16.992 00:58:09 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:16.992 00:58:09 -- host/auth.sh@27 -- # clean_kernel_target 00:24:16.992 00:58:09 -- 
nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:24:16.992 00:58:09 -- nvmf/common.sh@675 -- # echo 0 00:24:16.992 00:58:09 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:16.992 00:58:09 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:16.992 00:58:09 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:16.992 00:58:09 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:16.992 00:58:09 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:24:16.992 00:58:09 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:24:17.252 00:58:09 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:24:19.786 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:19.786 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:19.786 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:20.043 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:24:20.043 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:20.043 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:24:20.043 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:20.043 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:24:20.043 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:20.043 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:24:20.043 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:24:20.043 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:24:20.301 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:20.301 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:24:20.301 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:24:20.301 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:24:21.680 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:24:21.940 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:24:22.209 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:24:22.209 00:58:14 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.TU3 /tmp/spdk.key-null.cOz /tmp/spdk.key-sha256.98a /tmp/spdk.key-sha384.gEk /tmp/spdk.key-sha512.6c5 /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/nvme-auth.log 00:24:22.209 00:58:14 -- host/auth.sh@31 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:24:24.742 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:24.742 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:24:24.742 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:24:24.742 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:24:24.742 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:24.742 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:24:24.742 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:24:24.742 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:24:24.742 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:24:24.742 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:24:24.742 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:24:24.742 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:24:24.742 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:24:24.742 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:24:24.742 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:24:24.742 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:24:24.742 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 
00:24:24.742 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:24:24.742 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:24:25.001 00:24:25.001 real 0m47.205s 00:24:25.001 user 0m39.360s 00:24:25.001 sys 0m11.269s 00:24:25.001 00:58:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:25.001 00:58:17 -- common/autotest_common.sh@10 -- # set +x 00:24:25.001 ************************************ 00:24:25.001 END TEST nvmf_auth 00:24:25.001 ************************************ 00:24:25.001 00:58:17 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:24:25.001 00:58:17 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:25.001 00:58:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:25.001 00:58:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:25.001 00:58:17 -- common/autotest_common.sh@10 -- # set +x 00:24:25.001 ************************************ 00:24:25.001 START TEST nvmf_digest 00:24:25.001 ************************************ 00:24:25.001 00:58:17 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:24:25.260 * Looking for test storage... 00:24:25.260 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/host 00:24:25.260 00:58:17 -- host/digest.sh@12 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.260 00:58:17 -- nvmf/common.sh@7 -- # uname -s 00:24:25.260 00:58:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.260 00:58:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.260 00:58:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.260 00:58:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.260 00:58:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.260 00:58:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.260 00:58:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.260 00:58:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.260 00:58:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.260 00:58:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.260 00:58:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:24:25.260 00:58:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:24:25.260 00:58:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.260 00:58:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.260 00:58:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:24:25.260 00:58:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.260 00:58:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:24:25.260 00:58:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.260 00:58:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.260 00:58:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.260 00:58:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.260 00:58:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.260 00:58:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.260 00:58:17 -- paths/export.sh@5 -- # export PATH 00:24:25.260 00:58:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.260 00:58:17 -- nvmf/common.sh@47 -- # : 0 00:24:25.260 00:58:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.260 00:58:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.260 00:58:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.260 00:58:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.260 00:58:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.261 00:58:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.261 00:58:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.261 00:58:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.261 00:58:17 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:24:25.261 00:58:17 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:24:25.261 00:58:17 -- host/digest.sh@16 -- # runtime=2 00:24:25.261 00:58:17 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:24:25.261 00:58:17 -- host/digest.sh@138 -- # nvmftestinit 00:24:25.261 00:58:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:25.261 00:58:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.261 00:58:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:25.261 00:58:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 
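
The header above derives the initiator identity once per run: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<uuid>, and NVME_HOSTID is the trailing UUID. This digest suite drives bdevperf rather than the nvme CLI, but as an illustration of how NVME_CONNECT and NVME_HOST defined above would combine (target address, port, and subsystem NQN taken from this run; the parameter expansion is one way to derive the hostid):

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # strip the NQN prefix, keep the UUID
    nvme connect -t tcp -a 10.0.0.2 -s 4420 \
        -n nqn.2016-06.io.spdk:testnqn \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
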
00:24:25.261 00:58:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:25.261 00:58:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.261 00:58:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.261 00:58:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.261 00:58:17 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:24:25.261 00:58:17 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:24:25.261 00:58:17 -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.261 00:58:17 -- common/autotest_common.sh@10 -- # set +x 00:24:30.536 00:58:22 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:30.536 00:58:22 -- nvmf/common.sh@291 -- # pci_devs=() 00:24:30.536 00:58:22 -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:30.536 00:58:22 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:30.536 00:58:22 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:30.536 00:58:22 -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:30.536 00:58:22 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:30.536 00:58:22 -- nvmf/common.sh@295 -- # net_devs=() 00:24:30.536 00:58:22 -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:30.536 00:58:22 -- nvmf/common.sh@296 -- # e810=() 00:24:30.536 00:58:22 -- nvmf/common.sh@296 -- # local -ga e810 00:24:30.536 00:58:22 -- nvmf/common.sh@297 -- # x722=() 00:24:30.536 00:58:22 -- nvmf/common.sh@297 -- # local -ga x722 00:24:30.536 00:58:22 -- nvmf/common.sh@298 -- # mlx=() 00:24:30.536 00:58:22 -- nvmf/common.sh@298 -- # local -ga mlx 00:24:30.536 00:58:22 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.537 00:58:22 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:30.537 00:58:22 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:30.537 00:58:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.537 00:58:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:24:30.537 Found 0000:27:00.0 (0x8086 - 0x159b) 00:24:30.537 00:58:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.537 00:58:22 -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:30.537 00:58:22 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:24:30.537 Found 0000:27:00.1 (0x8086 - 0x159b) 00:24:30.537 00:58:22 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:30.537 00:58:22 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.537 00:58:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.537 00:58:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:30.537 00:58:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.537 00:58:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:24:30.537 Found net devices under 0000:27:00.0: cvl_0_0 00:24:30.537 00:58:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.537 00:58:22 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:30.537 00:58:22 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.537 00:58:22 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:24:30.537 00:58:22 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.537 00:58:22 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:24:30.537 Found net devices under 0000:27:00.1: cvl_0_1 00:24:30.537 00:58:22 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.537 00:58:22 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:24:30.537 00:58:22 -- nvmf/common.sh@403 -- # is_hw=yes 00:24:30.537 00:58:22 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:24:30.537 00:58:22 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:24:30.537 00:58:22 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.537 00:58:22 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.537 00:58:22 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:30.537 00:58:22 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:30.537 00:58:22 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:30.537 00:58:22 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:30.537 00:58:22 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:30.537 00:58:22 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:30.537 00:58:22 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.537 00:58:22 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:30.537 00:58:22 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:30.537 00:58:22 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:30.537 00:58:22 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:30.537 00:58:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:30.537 00:58:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:30.537 00:58:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:30.537 00:58:23 -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:30.537 00:58:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:30.537 00:58:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:30.537 00:58:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:30.537 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.537 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms 00:24:30.537 00:24:30.537 --- 10.0.0.2 ping statistics --- 00:24:30.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.537 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms 00:24:30.537 00:58:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:30.537 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.537 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.379 ms 00:24:30.537 00:24:30.537 --- 10.0.0.1 ping statistics --- 00:24:30.537 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.537 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:24:30.537 00:58:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.537 00:58:23 -- nvmf/common.sh@411 -- # return 0 00:24:30.537 00:58:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:30.537 00:58:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.537 00:58:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:30.537 00:58:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:30.537 00:58:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.537 00:58:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:30.537 00:58:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:30.537 00:58:23 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:30.537 00:58:23 -- host/digest.sh@141 -- # [[ 1 -eq 1 ]] 00:24:30.537 00:58:23 -- host/digest.sh@142 -- # run_test nvmf_digest_dsa_initiator run_digest dsa_initiator 00:24:30.537 00:58:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:30.537 00:58:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:30.537 00:58:23 -- common/autotest_common.sh@10 -- # set +x 00:24:30.796 ************************************ 00:24:30.796 START TEST nvmf_digest_dsa_initiator 00:24:30.796 ************************************ 00:24:30.796 00:58:23 -- common/autotest_common.sh@1111 -- # run_digest dsa_initiator 00:24:30.796 00:58:23 -- host/digest.sh@120 -- # local dsa_initiator 00:24:30.796 00:58:23 -- host/digest.sh@121 -- # [[ dsa_initiator == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:24:30.796 00:58:23 -- host/digest.sh@121 -- # dsa_initiator=true 00:24:30.796 00:58:23 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:24:30.796 00:58:23 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:24:30.796 00:58:23 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:30.796 00:58:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:30.796 00:58:23 -- common/autotest_common.sh@10 -- # set +x 00:24:30.796 00:58:23 -- nvmf/common.sh@470 -- # nvmfpid=2888456 00:24:30.796 00:58:23 -- nvmf/common.sh@471 -- # waitforlisten 2888456 00:24:30.796 00:58:23 -- common/autotest_common.sh@817 -- # '[' -z 2888456 ']' 00:24:30.796 00:58:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.796 00:58:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:30.796 00:58:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.796 00:58:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:30.796 00:58:23 -- common/autotest_common.sh@10 -- # set +x 00:24:30.796 00:58:23 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:30.796 [2024-04-27 00:58:23.376127] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:24:30.796 [2024-04-27 00:58:23.376231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.796 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.055 [2024-04-27 00:58:23.497439] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.055 [2024-04-27 00:58:23.590962] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.055 [2024-04-27 00:58:23.590999] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.055 [2024-04-27 00:58:23.591011] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.055 [2024-04-27 00:58:23.591020] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.055 [2024-04-27 00:58:23.591028] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.055 [2024-04-27 00:58:23.591053] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:31.624 00:58:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:31.624 00:58:24 -- common/autotest_common.sh@850 -- # return 0 00:24:31.624 00:58:24 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:31.624 00:58:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:31.624 00:58:24 -- common/autotest_common.sh@10 -- # set +x 00:24:31.624 00:58:24 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:31.624 00:58:24 -- host/digest.sh@125 -- # [[ dsa_initiator == \d\s\a\_\t\a\r\g\e\t ]] 00:24:31.624 00:58:24 -- host/digest.sh@126 -- # common_target_config 00:24:31.624 00:58:24 -- host/digest.sh@43 -- # rpc_cmd 00:24:31.624 00:58:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:31.624 00:58:24 -- common/autotest_common.sh@10 -- # set +x 00:24:31.624 null0 00:24:31.624 [2024-04-27 00:58:24.262083] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:31.624 [2024-04-27 00:58:24.286234] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:31.624 00:58:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:31.624 00:58:24 -- host/digest.sh@128 -- # run_bperf randread 4096 128 true 00:24:31.624 00:58:24 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:31.624 00:58:24 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:31.624 00:58:24 -- host/digest.sh@80 -- # rw=randread 00:24:31.624 00:58:24 -- host/digest.sh@80 -- # bs=4096 00:24:31.624 00:58:24 -- host/digest.sh@80 -- # qd=128 00:24:31.624 00:58:24 -- host/digest.sh@80 -- # scan_dsa=true 00:24:31.624 00:58:24 -- host/digest.sh@83 -- # bperfpid=2888667 00:24:31.624 00:58:24 
-- host/digest.sh@84 -- # waitforlisten 2888667 /var/tmp/bperf.sock 00:24:31.624 00:58:24 -- common/autotest_common.sh@817 -- # '[' -z 2888667 ']' 00:24:31.624 00:58:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:31.624 00:58:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:31.624 00:58:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:31.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:31.624 00:58:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:31.624 00:58:24 -- common/autotest_common.sh@10 -- # set +x 00:24:31.624 00:58:24 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:31.882 [2024-04-27 00:58:24.364826] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:24:31.882 [2024-04-27 00:58:24.364939] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2888667 ] 00:24:31.882 EAL: No free 2048 kB hugepages reported on node 1 00:24:31.882 [2024-04-27 00:58:24.505150] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.140 [2024-04-27 00:58:24.653582] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.399 00:58:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:32.399 00:58:25 -- common/autotest_common.sh@850 -- # return 0 00:24:32.399 00:58:25 -- host/digest.sh@86 -- # true 00:24:32.399 00:58:25 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:24:32.399 00:58:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:24:32.657 [2024-04-27 00:58:25.182481] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:24:32.657 00:58:25 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:32.657 00:58:25 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:40.779 00:58:32 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.779 00:58:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:40.779 nvme0n1 00:24:40.779 00:58:32 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:40.779 00:58:32 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:40.779 Running I/O for 2 seconds... 
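
Each run_bperf round above follows the same order, and the order matters: bdevperf is launched with --wait-for-rpc so that dsa_scan_accel_module can be issued before framework_start_init, registering the DSA accel module ahead of subsystem initialization. A condensed sketch of the round just started, with the workspace paths shortened for readability (parameters as logged: randread, 4096-byte I/O, queue depth 128):

    bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module   # opt in to DSA before init
    rpc.py -s /var/tmp/bperf.sock framework_start_init
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    bdevperf.py -s /var/tmp/bperf.sock perform_tests      # "Running I/O for 2 seconds..."
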
00:24:42.202 00:24:42.202 Latency(us) 00:24:42.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.202 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:42.202 nvme0n1 : 2.00 20190.83 78.87 0.00 0.00 6332.42 2949.12 15728.64 00:24:42.202 =================================================================================================================== 00:24:42.202 Total : 20190.83 78.87 0.00 0.00 6332.42 2949.12 15728.64 00:24:42.202 0 00:24:42.202 00:58:34 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:42.202 00:58:34 -- host/digest.sh@93 -- # get_accel_stats 00:24:42.202 00:58:34 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:42.202 00:58:34 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:42.202 00:58:34 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:42.202 | select(.opcode=="crc32c") 00:24:42.202 | "\(.module_name) \(.executed)"' 00:24:42.202 00:58:34 -- host/digest.sh@94 -- # true 00:24:42.202 00:58:34 -- host/digest.sh@94 -- # exp_module=dsa 00:24:42.202 00:58:34 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:42.202 00:58:34 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:24:42.202 00:58:34 -- host/digest.sh@98 -- # killprocess 2888667 00:24:42.202 00:58:34 -- common/autotest_common.sh@936 -- # '[' -z 2888667 ']' 00:24:42.202 00:58:34 -- common/autotest_common.sh@940 -- # kill -0 2888667 00:24:42.202 00:58:34 -- common/autotest_common.sh@941 -- # uname 00:24:42.202 00:58:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:42.202 00:58:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2888667 00:24:42.202 00:58:34 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:42.202 00:58:34 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:42.202 00:58:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2888667' 00:24:42.202 killing process with pid 2888667 00:24:42.202 00:58:34 -- common/autotest_common.sh@955 -- # kill 2888667 00:24:42.202 Received shutdown signal, test time was about 2.000000 seconds 00:24:42.202 00:24:42.202 Latency(us) 00:24:42.202 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:42.202 =================================================================================================================== 00:24:42.202 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:42.202 00:58:34 -- common/autotest_common.sh@960 -- # wait 2888667 00:24:44.773 00:58:37 -- host/digest.sh@129 -- # run_bperf randread 131072 16 true 00:24:44.773 00:58:37 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:44.773 00:58:37 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:44.773 00:58:37 -- host/digest.sh@80 -- # rw=randread 00:24:44.773 00:58:37 -- host/digest.sh@80 -- # bs=131072 00:24:44.773 00:58:37 -- host/digest.sh@80 -- # qd=16 00:24:44.773 00:58:37 -- host/digest.sh@80 -- # scan_dsa=true 00:24:44.773 00:58:37 -- host/digest.sh@83 -- # bperfpid=2891156 00:24:44.773 00:58:37 -- host/digest.sh@84 -- # waitforlisten 2891156 /var/tmp/bperf.sock 00:24:44.773 00:58:37 -- common/autotest_common.sh@817 -- # '[' -z 2891156 ']' 00:24:44.773 00:58:37 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:44.773 00:58:37 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:44.773 00:58:37 -- common/autotest_common.sh@824 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:44.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:44.773 00:58:37 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:44.773 00:58:37 -- common/autotest_common.sh@10 -- # set +x 00:24:44.773 00:58:37 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:44.773 [2024-04-27 00:58:37.267593] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:24:44.773 [2024-04-27 00:58:37.267737] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2891156 ] 00:24:44.773 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:44.773 Zero copy mechanism will not be used. 00:24:44.773 EAL: No free 2048 kB hugepages reported on node 1 00:24:44.773 [2024-04-27 00:58:37.397651] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.034 [2024-04-27 00:58:37.494586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:45.295 00:58:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:45.295 00:58:37 -- common/autotest_common.sh@850 -- # return 0 00:24:45.295 00:58:37 -- host/digest.sh@86 -- # true 00:24:45.295 00:58:37 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:24:45.295 00:58:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:24:45.555 [2024-04-27 00:58:38.111202] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:24:45.555 00:58:38 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:45.555 00:58:38 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:53.678 00:58:45 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.678 00:58:45 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:53.678 nvme0n1 00:24:53.678 00:58:45 -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:53.678 00:58:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:53.678 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:53.678 Zero copy mechanism will not be used. 00:24:53.678 Running I/O for 2 seconds... 
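--ddgst enables the NVMe/TCP data digest, a CRC32C computed over each data PDU; those digests go through the SPDK accel framework, which is why the pass/fail check in the records above reads the crc32c counters out of accel_get_stats. A sketch of that check, with the jq filter exactly as it appears in this log:

    read -r acc_module acc_executed < <(
        /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) && [[ $acc_module == dsa ]]   # dsa here; software in the later scan_dsa=false phases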
00:24:55.057 00:24:55.057 Latency(us) 00:24:55.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.057 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:55.057 nvme0n1 : 2.00 6915.09 864.39 0.00 0.00 2311.02 586.37 4794.48 00:24:55.057 =================================================================================================================== 00:24:55.057 Total : 6915.09 864.39 0.00 0.00 2311.02 586.37 4794.48 00:24:55.057 0 00:24:55.057 00:58:47 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:55.057 00:58:47 -- host/digest.sh@93 -- # get_accel_stats 00:24:55.057 00:58:47 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:55.057 00:58:47 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:55.057 | select(.opcode=="crc32c") 00:24:55.057 | "\(.module_name) \(.executed)"' 00:24:55.057 00:58:47 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:55.057 00:58:47 -- host/digest.sh@94 -- # true 00:24:55.057 00:58:47 -- host/digest.sh@94 -- # exp_module=dsa 00:24:55.057 00:58:47 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:55.057 00:58:47 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:24:55.057 00:58:47 -- host/digest.sh@98 -- # killprocess 2891156 00:24:55.057 00:58:47 -- common/autotest_common.sh@936 -- # '[' -z 2891156 ']' 00:24:55.057 00:58:47 -- common/autotest_common.sh@940 -- # kill -0 2891156 00:24:55.057 00:58:47 -- common/autotest_common.sh@941 -- # uname 00:24:55.057 00:58:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:55.057 00:58:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2891156 00:24:55.057 00:58:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:55.057 00:58:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:55.057 00:58:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2891156' 00:24:55.057 killing process with pid 2891156 00:24:55.057 00:58:47 -- common/autotest_common.sh@955 -- # kill 2891156 00:24:55.057 Received shutdown signal, test time was about 2.000000 seconds 00:24:55.057 00:24:55.057 Latency(us) 00:24:55.057 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:55.057 =================================================================================================================== 00:24:55.057 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:55.057 00:58:47 -- common/autotest_common.sh@960 -- # wait 2891156 00:24:57.591 00:58:49 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 true 00:24:57.591 00:58:49 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:57.591 00:58:49 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:57.591 00:58:49 -- host/digest.sh@80 -- # rw=randwrite 00:24:57.591 00:58:49 -- host/digest.sh@80 -- # bs=4096 00:24:57.591 00:58:49 -- host/digest.sh@80 -- # qd=128 00:24:57.591 00:58:49 -- host/digest.sh@80 -- # scan_dsa=true 00:24:57.591 00:58:49 -- host/digest.sh@83 -- # bperfpid=2893552 00:24:57.591 00:58:49 -- host/digest.sh@84 -- # waitforlisten 2893552 /var/tmp/bperf.sock 00:24:57.591 00:58:49 -- common/autotest_common.sh@817 -- # '[' -z 2893552 ']' 00:24:57.591 00:58:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:57.591 00:58:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:57.591 00:58:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:57.591 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:57.591 00:58:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:57.591 00:58:49 -- common/autotest_common.sh@10 -- # set +x 00:24:57.591 00:58:49 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:57.591 [2024-04-27 00:58:50.028794] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:24:57.591 [2024-04-27 00:58:50.028876] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893552 ] 00:24:57.591 EAL: No free 2048 kB hugepages reported on node 1 00:24:57.591 [2024-04-27 00:58:50.115605] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:57.591 [2024-04-27 00:58:50.205199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:58.160 00:58:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:58.160 00:58:50 -- common/autotest_common.sh@850 -- # return 0 00:24:58.160 00:58:50 -- host/digest.sh@86 -- # true 00:24:58.160 00:58:50 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:24:58.160 00:58:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:24:58.160 [2024-04-27 00:58:50.841704] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:24:58.160 00:58:50 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:58.160 00:58:50 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:06.282 00:58:57 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.282 00:58:57 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:06.282 nvme0n1 00:25:06.282 00:58:58 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:06.282 00:58:58 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:06.282 Running I/O for 2 seconds... 
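The MiB/s column in these result tables is just IOPS scaled by the I/O size (MiB/s = IOPS x block_size / 2^20). Spot-checking the two randread tables above shows the expected IOPS-for-bandwidth trade as the block size grows:

    echo '20190.83 * 4096 / 1048576' | bc -l    # ~78.87 MiB/s  (4 KiB,   qd 128)
    echo '6915.09 * 131072 / 1048576' | bc -l   # ~864.39 MiB/s (128 KiB, qd 16)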
00:25:07.660 00:25:07.660 Latency(us) 00:25:07.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.660 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:07.660 nvme0n1 : 2.00 26424.50 103.22 0.00 0.00 4834.11 1845.36 9382.00 00:25:07.660 =================================================================================================================== 00:25:07.660 Total : 26424.50 103.22 0.00 0.00 4834.11 1845.36 9382.00 00:25:07.660 0 00:25:07.660 00:59:00 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:07.660 00:59:00 -- host/digest.sh@93 -- # get_accel_stats 00:25:07.660 00:59:00 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:07.660 00:59:00 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:07.660 | select(.opcode=="crc32c") 00:25:07.660 | "\(.module_name) \(.executed)"' 00:25:07.660 00:59:00 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:07.921 00:59:00 -- host/digest.sh@94 -- # true 00:25:07.921 00:59:00 -- host/digest.sh@94 -- # exp_module=dsa 00:25:07.921 00:59:00 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:07.921 00:59:00 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:25:07.921 00:59:00 -- host/digest.sh@98 -- # killprocess 2893552 00:25:07.921 00:59:00 -- common/autotest_common.sh@936 -- # '[' -z 2893552 ']' 00:25:07.921 00:59:00 -- common/autotest_common.sh@940 -- # kill -0 2893552 00:25:07.921 00:59:00 -- common/autotest_common.sh@941 -- # uname 00:25:07.921 00:59:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:07.921 00:59:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2893552 00:25:07.921 00:59:00 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:07.921 00:59:00 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:07.921 00:59:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2893552' 00:25:07.921 killing process with pid 2893552 00:25:07.921 00:59:00 -- common/autotest_common.sh@955 -- # kill 2893552 00:25:07.921 Received shutdown signal, test time was about 2.000000 seconds 00:25:07.921 00:25:07.921 Latency(us) 00:25:07.921 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:07.921 =================================================================================================================== 00:25:07.921 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:07.921 00:59:00 -- common/autotest_common.sh@960 -- # wait 2893552 00:25:10.472 00:59:02 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 true 00:25:10.472 00:59:02 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:10.472 00:59:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:10.472 00:59:02 -- host/digest.sh@80 -- # rw=randwrite 00:25:10.472 00:59:02 -- host/digest.sh@80 -- # bs=131072 00:25:10.472 00:59:02 -- host/digest.sh@80 -- # qd=16 00:25:10.472 00:59:02 -- host/digest.sh@80 -- # scan_dsa=true 00:25:10.472 00:59:02 -- host/digest.sh@83 -- # bperfpid=2895942 00:25:10.472 00:59:02 -- host/digest.sh@84 -- # waitforlisten 2895942 /var/tmp/bperf.sock 00:25:10.472 00:59:02 -- common/autotest_common.sh@817 -- # '[' -z 2895942 ']' 00:25:10.472 00:59:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:10.472 00:59:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:10.472 00:59:02 -- common/autotest_common.sh@824 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:10.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:10.472 00:59:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:10.472 00:59:02 -- common/autotest_common.sh@10 -- # set +x 00:25:10.472 00:59:02 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:10.472 [2024-04-27 00:59:02.807155] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:25:10.472 [2024-04-27 00:59:02.807299] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895942 ] 00:25:10.472 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:10.472 Zero copy mechanism will not be used. 00:25:10.472 EAL: No free 2048 kB hugepages reported on node 1 00:25:10.472 [2024-04-27 00:59:02.939994] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:10.472 [2024-04-27 00:59:03.036511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:11.040 00:59:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:11.040 00:59:03 -- common/autotest_common.sh@850 -- # return 0 00:25:11.040 00:59:03 -- host/digest.sh@86 -- # true 00:25:11.040 00:59:03 -- host/digest.sh@86 -- # bperf_rpc dsa_scan_accel_module 00:25:11.040 00:59:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock dsa_scan_accel_module 00:25:11.040 [2024-04-27 00:59:03.653097] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:25:11.040 00:59:03 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:11.040 00:59:03 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:19.160 00:59:10 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.160 00:59:10 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:19.160 nvme0n1 00:25:19.160 00:59:10 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:19.160 00:59:10 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:19.160 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:19.160 Zero copy mechanism will not be used. 00:25:19.160 Running I/O for 2 seconds... 
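Every teardown in this log goes through the same killprocess pattern: probe the pid with kill -0, confirm via ps that the comm is an SPDK reactor (reactor_1 for the bperf processes, reactor_0 for the target) rather than a sudo wrapper, then kill and reap it. A hedged reconstruction inferred from the xtrace records above, not copied from autotest_common.sh:

    killprocess() {
        local pid=$1
        kill -0 "$pid" || return                              # already gone
        [[ $(uname) == Linux ]] && \
            process_name=$(ps --no-headers -o comm= "$pid")   # e.g. reactor_1
        # the real helper special-cases process_name == sudo; the plain-kill path is shown here (assumption)
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                           # reap, letting bdevperf dump its final stats
    }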
00:25:20.535 00:25:20.535 Latency(us) 00:25:20.536 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.536 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:20.536 nvme0n1 : 2.00 7504.95 938.12 0.00 0.00 2127.56 1371.08 5311.87 00:25:20.536 =================================================================================================================== 00:25:20.536 Total : 7504.95 938.12 0.00 0.00 2127.56 1371.08 5311.87 00:25:20.536 0 00:25:20.536 00:59:13 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:20.536 00:59:13 -- host/digest.sh@93 -- # get_accel_stats 00:25:20.536 00:59:13 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:20.536 00:59:13 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:20.536 00:59:13 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:20.536 | select(.opcode=="crc32c") 00:25:20.536 | "\(.module_name) \(.executed)"' 00:25:20.536 00:59:13 -- host/digest.sh@94 -- # true 00:25:20.536 00:59:13 -- host/digest.sh@94 -- # exp_module=dsa 00:25:20.536 00:59:13 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:20.536 00:59:13 -- host/digest.sh@96 -- # [[ dsa == \d\s\a ]] 00:25:20.536 00:59:13 -- host/digest.sh@98 -- # killprocess 2895942 00:25:20.536 00:59:13 -- common/autotest_common.sh@936 -- # '[' -z 2895942 ']' 00:25:20.536 00:59:13 -- common/autotest_common.sh@940 -- # kill -0 2895942 00:25:20.536 00:59:13 -- common/autotest_common.sh@941 -- # uname 00:25:20.536 00:59:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.536 00:59:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2895942 00:25:20.794 00:59:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:20.794 00:59:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:20.794 00:59:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2895942' 00:25:20.794 killing process with pid 2895942 00:25:20.794 00:59:13 -- common/autotest_common.sh@955 -- # kill 2895942 00:25:20.794 Received shutdown signal, test time was about 2.000000 seconds 00:25:20.794 00:25:20.794 Latency(us) 00:25:20.794 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.794 =================================================================================================================== 00:25:20.794 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.794 00:59:13 -- common/autotest_common.sh@960 -- # wait 2895942 00:25:23.327 00:59:15 -- host/digest.sh@132 -- # killprocess 2888456 00:25:23.327 00:59:15 -- common/autotest_common.sh@936 -- # '[' -z 2888456 ']' 00:25:23.327 00:59:15 -- common/autotest_common.sh@940 -- # kill -0 2888456 00:25:23.327 00:59:15 -- common/autotest_common.sh@941 -- # uname 00:25:23.327 00:59:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:23.327 00:59:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2888456 00:25:23.327 00:59:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:23.327 00:59:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:23.327 00:59:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2888456' 00:25:23.327 killing process with pid 2888456 00:25:23.327 00:59:15 -- common/autotest_common.sh@955 -- # kill 2888456 00:25:23.327 00:59:15 -- common/autotest_common.sh@960 -- # wait 2888456 00:25:23.585 00:25:23.585 real 0m52.753s 
00:25:23.585 user 1m12.945s 00:25:23.585 sys 0m4.227s 00:25:23.585 00:59:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:23.585 00:59:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.585 ************************************ 00:25:23.585 END TEST nvmf_digest_dsa_initiator 00:25:23.585 ************************************ 00:25:23.585 00:59:16 -- host/digest.sh@143 -- # run_test nvmf_digest_dsa_target run_digest dsa_target 00:25:23.585 00:59:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:23.585 00:59:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:23.585 00:59:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.585 ************************************ 00:25:23.585 START TEST nvmf_digest_dsa_target 00:25:23.585 ************************************ 00:25:23.585 00:59:16 -- common/autotest_common.sh@1111 -- # run_digest dsa_target 00:25:23.585 00:59:16 -- host/digest.sh@120 -- # local dsa_initiator 00:25:23.585 00:59:16 -- host/digest.sh@121 -- # [[ dsa_target == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:25:23.585 00:59:16 -- host/digest.sh@121 -- # dsa_initiator=false 00:25:23.585 00:59:16 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:25:23.585 00:59:16 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:25:23.585 00:59:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:23.585 00:59:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:23.585 00:59:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.585 00:59:16 -- nvmf/common.sh@470 -- # nvmfpid=2898663 00:25:23.585 00:59:16 -- nvmf/common.sh@471 -- # waitforlisten 2898663 00:25:23.585 00:59:16 -- common/autotest_common.sh@817 -- # '[' -z 2898663 ']' 00:25:23.585 00:59:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.585 00:59:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:23.585 00:59:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:23.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.585 00:59:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:23.585 00:59:16 -- common/autotest_common.sh@10 -- # set +x 00:25:23.585 00:59:16 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:23.585 [2024-04-27 00:59:16.248514] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:25:23.585 [2024-04-27 00:59:16.248609] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.843 EAL: No free 2048 kB hugepages reported on node 1 00:25:23.843 [2024-04-27 00:59:16.343556] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:23.843 [2024-04-27 00:59:16.434713] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.843 [2024-04-27 00:59:16.434747] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.843 [2024-04-27 00:59:16.434758] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.843 [2024-04-27 00:59:16.434767] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:23.843 [2024-04-27 00:59:16.434774] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:23.843 [2024-04-27 00:59:16.434805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:24.410 00:59:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:24.410 00:59:16 -- common/autotest_common.sh@850 -- # return 0 00:25:24.410 00:59:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:24.410 00:59:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:24.410 00:59:16 -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 00:59:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.410 00:59:16 -- host/digest.sh@125 -- # [[ dsa_target == \d\s\a\_\t\a\r\g\e\t ]] 00:25:24.410 00:59:16 -- host/digest.sh@125 -- # rpc_cmd dsa_scan_accel_module 00:25:24.410 00:59:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.410 00:59:16 -- common/autotest_common.sh@10 -- # set +x 00:25:24.410 [2024-04-27 00:59:17.003269] accel_dsa_rpc.c: 50:rpc_dsa_scan_accel_module: *NOTICE*: Enabled DSA user-mode 00:25:24.410 00:59:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:24.410 00:59:17 -- host/digest.sh@126 -- # common_target_config 00:25:24.410 00:59:17 -- host/digest.sh@43 -- # rpc_cmd 00:25:24.410 00:59:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:24.410 00:59:17 -- common/autotest_common.sh@10 -- # set +x 00:25:32.535 null0 00:25:32.535 [2024-04-27 00:59:23.849308] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:32.535 [2024-04-27 00:59:23.875651] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.535 00:59:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:32.535 00:59:23 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:25:32.535 00:59:23 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:32.535 00:59:23 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:32.535 00:59:23 -- host/digest.sh@80 -- # rw=randread 00:25:32.535 00:59:23 -- host/digest.sh@80 -- # bs=4096 00:25:32.535 00:59:23 -- host/digest.sh@80 -- # qd=128 00:25:32.535 00:59:23 -- host/digest.sh@80 -- # scan_dsa=false 00:25:32.535 00:59:23 -- host/digest.sh@83 -- # bperfpid=2900159 00:25:32.535 00:59:23 -- host/digest.sh@84 -- # waitforlisten 2900159 /var/tmp/bperf.sock 00:25:32.535 00:59:23 -- common/autotest_common.sh@817 -- # '[' -z 2900159 ']' 00:25:32.535 00:59:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:32.535 00:59:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:32.535 00:59:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:32.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:32.535 00:59:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:32.535 00:59:23 -- common/autotest_common.sh@10 -- # set +x 00:25:32.535 00:59:23 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:32.535 [2024-04-27 00:59:23.955725] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
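common_target_config does not echo its individual RPCs, but the notices above (the null0 bdev, the TCP transport init, and the listener on 10.0.0.2 port 4420) are consistent with a standard null-bdev NVMe-oF target. As an assumption, a typical configuration over the default /var/tmp/spdk.sock would look like the following; the RPC names are standard SPDK calls, and the bdev size is illustrative only:

    RPC=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp
    $RPC bdev_null_create null0 100 4096                       # 100 MiB null bdev, 4 KiB blocks (illustrative)
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a   # nqn matches the initiator attaches in this log
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4

Note the target itself was started under ip netns exec cvl_0_0_ns_spdk, so 10.0.0.2 lives inside that namespace.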
00:25:32.535 [2024-04-27 00:59:23.955835] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900159 ] 00:25:32.535 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.535 [2024-04-27 00:59:24.091379] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.535 [2024-04-27 00:59:24.228618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:32.535 00:59:24 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:32.535 00:59:24 -- common/autotest_common.sh@850 -- # return 0 00:25:32.535 00:59:24 -- host/digest.sh@86 -- # false 00:25:32.535 00:59:24 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:32.535 00:59:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:32.535 00:59:24 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.535 00:59:24 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:32.796 nvme0n1 00:25:32.796 00:59:25 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:32.796 00:59:25 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:32.796 Running I/O for 2 seconds... 00:25:35.329 00:25:35.329 Latency(us) 00:25:35.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.329 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:35.329 nvme0n1 : 2.04 21768.36 85.03 0.00 0.00 5758.04 2466.22 47461.86 00:25:35.329 =================================================================================================================== 00:25:35.329 Total : 21768.36 85.03 0.00 0.00 5758.04 2466.22 47461.86 00:25:35.329 0 00:25:35.329 00:59:27 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:35.329 00:59:27 -- host/digest.sh@93 -- # get_accel_stats 00:25:35.329 00:59:27 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:35.329 00:59:27 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:35.329 00:59:27 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:35.329 | select(.opcode=="crc32c") 00:25:35.329 | "\(.module_name) \(.executed)"' 00:25:35.329 00:59:27 -- host/digest.sh@94 -- # false 00:25:35.329 00:59:27 -- host/digest.sh@94 -- # exp_module=software 00:25:35.329 00:59:27 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:35.329 00:59:27 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:35.329 00:59:27 -- host/digest.sh@98 -- # killprocess 2900159 00:25:35.329 00:59:27 -- common/autotest_common.sh@936 -- # '[' -z 2900159 ']' 00:25:35.329 00:59:27 -- common/autotest_common.sh@940 -- # kill -0 2900159 00:25:35.329 00:59:27 -- common/autotest_common.sh@941 -- # uname 00:25:35.329 00:59:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:35.329 00:59:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2900159 00:25:35.329 00:59:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:35.329 00:59:27 
-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:35.329 00:59:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2900159' 00:25:35.329 killing process with pid 2900159 00:25:35.329 00:59:27 -- common/autotest_common.sh@955 -- # kill 2900159 00:25:35.329 Received shutdown signal, test time was about 2.000000 seconds 00:25:35.329 00:25:35.329 Latency(us) 00:25:35.329 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:35.329 =================================================================================================================== 00:25:35.329 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:35.329 00:59:27 -- common/autotest_common.sh@960 -- # wait 2900159 00:25:35.329 00:59:27 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:25:35.329 00:59:27 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:35.329 00:59:27 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:35.329 00:59:27 -- host/digest.sh@80 -- # rw=randread 00:25:35.329 00:59:27 -- host/digest.sh@80 -- # bs=131072 00:25:35.329 00:59:27 -- host/digest.sh@80 -- # qd=16 00:25:35.329 00:59:28 -- host/digest.sh@80 -- # scan_dsa=false 00:25:35.329 00:59:28 -- host/digest.sh@83 -- # bperfpid=2900944 00:25:35.329 00:59:28 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:35.329 00:59:28 -- host/digest.sh@84 -- # waitforlisten 2900944 /var/tmp/bperf.sock 00:25:35.329 00:59:28 -- common/autotest_common.sh@817 -- # '[' -z 2900944 ']' 00:25:35.329 00:59:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:35.329 00:59:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:35.329 00:59:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:35.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:35.329 00:59:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:35.329 00:59:28 -- common/autotest_common.sh@10 -- # set +x 00:25:35.587 [2024-04-27 00:59:28.046197] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:25:35.587 [2024-04-27 00:59:28.046297] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900944 ] 00:25:35.587 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:35.587 Zero copy mechanism will not be used. 
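The lone `false` at host/digest.sh@86 in the records above is the scan_dsa flag being executed as a command: in this nvmf_digest_dsa_target phase the initiator runs with scan_dsa=false, so dsa_scan_accel_module is never sent to bdevperf and crc32c falls back to the accel framework's software module, which is why exp_module is now software instead of dsa. Inferred from the @86/@94 xtrace records, not copied from digest.sh, the guards presumably look like:

    $scan_dsa && bperf_rpc dsa_scan_accel_module        # `false` returns non-zero, so the RPC is skipped
    $scan_dsa && exp_module=dsa || exp_module=software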
00:25:35.587 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.587 [2024-04-27 00:59:28.130672] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:35.587 [2024-04-27 00:59:28.220515] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.152 00:59:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:36.152 00:59:28 -- common/autotest_common.sh@850 -- # return 0 00:25:36.152 00:59:28 -- host/digest.sh@86 -- # false 00:25:36.152 00:59:28 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:36.152 00:59:28 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:36.411 00:59:29 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.411 00:59:29 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:36.980 nvme0n1 00:25:36.980 00:59:29 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:36.980 00:59:29 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:36.980 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:36.980 Zero copy mechanism will not be used. 00:25:36.980 Running I/O for 2 seconds... 00:25:38.885 00:25:38.885 Latency(us) 00:25:38.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:38.885 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:38.885 nvme0n1 : 2.00 7428.01 928.50 0.00 0.00 2151.53 383.73 4121.87 00:25:38.885 =================================================================================================================== 00:25:38.885 Total : 7428.01 928.50 0.00 0.00 2151.53 383.73 4121.87 00:25:38.885 0 00:25:38.885 00:59:31 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:38.885 00:59:31 -- host/digest.sh@93 -- # get_accel_stats 00:25:38.885 00:59:31 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:38.885 00:59:31 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:38.885 | select(.opcode=="crc32c") 00:25:38.885 | "\(.module_name) \(.executed)"' 00:25:38.885 00:59:31 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:39.145 00:59:31 -- host/digest.sh@94 -- # false 00:25:39.145 00:59:31 -- host/digest.sh@94 -- # exp_module=software 00:25:39.145 00:59:31 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:39.145 00:59:31 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:39.145 00:59:31 -- host/digest.sh@98 -- # killprocess 2900944 00:25:39.145 00:59:31 -- common/autotest_common.sh@936 -- # '[' -z 2900944 ']' 00:25:39.145 00:59:31 -- common/autotest_common.sh@940 -- # kill -0 2900944 00:25:39.145 00:59:31 -- common/autotest_common.sh@941 -- # uname 00:25:39.145 00:59:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:39.145 00:59:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2900944 00:25:39.145 00:59:31 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:39.145 00:59:31 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:39.145 00:59:31 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 2900944' 00:25:39.145 killing process with pid 2900944 00:25:39.145 00:59:31 -- common/autotest_common.sh@955 -- # kill 2900944 00:25:39.145 Received shutdown signal, test time was about 2.000000 seconds 00:25:39.145 00:25:39.145 Latency(us) 00:25:39.145 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:39.145 =================================================================================================================== 00:25:39.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:39.145 00:59:31 -- common/autotest_common.sh@960 -- # wait 2900944 00:25:39.405 00:59:32 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:25:39.405 00:59:32 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:39.405 00:59:32 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:39.405 00:59:32 -- host/digest.sh@80 -- # rw=randwrite 00:25:39.405 00:59:32 -- host/digest.sh@80 -- # bs=4096 00:25:39.405 00:59:32 -- host/digest.sh@80 -- # qd=128 00:25:39.405 00:59:32 -- host/digest.sh@80 -- # scan_dsa=false 00:25:39.405 00:59:32 -- host/digest.sh@83 -- # bperfpid=2901676 00:25:39.405 00:59:32 -- host/digest.sh@84 -- # waitforlisten 2901676 /var/tmp/bperf.sock 00:25:39.405 00:59:32 -- common/autotest_common.sh@817 -- # '[' -z 2901676 ']' 00:25:39.405 00:59:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:39.405 00:59:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:39.405 00:59:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:39.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:39.405 00:59:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:39.405 00:59:32 -- common/autotest_common.sh@10 -- # set +x 00:25:39.405 00:59:32 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:25:39.666 [2024-04-27 00:59:32.137798] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:25:39.666 [2024-04-27 00:59:32.137944] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2901676 ] 00:25:39.666 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.666 [2024-04-27 00:59:32.268892] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:39.926 [2024-04-27 00:59:32.365965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:40.185 00:59:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:40.185 00:59:32 -- common/autotest_common.sh@850 -- # return 0 00:25:40.185 00:59:32 -- host/digest.sh@86 -- # false 00:25:40.185 00:59:32 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:40.185 00:59:32 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:40.442 00:59:33 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:40.442 00:59:33 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:41.009 nvme0n1 00:25:41.009 00:59:33 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:41.009 00:59:33 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:41.009 Running I/O for 2 seconds... 00:25:42.911 00:25:42.912 Latency(us) 00:25:42.912 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:42.912 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:25:42.912 nvme0n1 : 2.01 24922.58 97.35 0.00 0.00 5125.83 2414.48 10278.80 00:25:42.912 =================================================================================================================== 00:25:42.912 Total : 24922.58 97.35 0.00 0.00 5125.83 2414.48 10278.80 00:25:42.912 0 00:25:42.912 00:59:35 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:42.912 00:59:35 -- host/digest.sh@93 -- # get_accel_stats 00:25:42.912 00:59:35 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:42.912 00:59:35 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:42.912 00:59:35 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:42.912 | select(.opcode=="crc32c") 00:25:42.912 | "\(.module_name) \(.executed)"' 00:25:43.172 00:59:35 -- host/digest.sh@94 -- # false 00:25:43.172 00:59:35 -- host/digest.sh@94 -- # exp_module=software 00:25:43.172 00:59:35 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:43.172 00:59:35 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:43.172 00:59:35 -- host/digest.sh@98 -- # killprocess 2901676 00:25:43.172 00:59:35 -- common/autotest_common.sh@936 -- # '[' -z 2901676 ']' 00:25:43.172 00:59:35 -- common/autotest_common.sh@940 -- # kill -0 2901676 00:25:43.172 00:59:35 -- common/autotest_common.sh@941 -- # uname 00:25:43.172 00:59:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.172 00:59:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2901676 00:25:43.172 00:59:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:43.172 00:59:35 
-- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:43.172 00:59:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2901676' 00:25:43.172 killing process with pid 2901676 00:25:43.172 00:59:35 -- common/autotest_common.sh@955 -- # kill 2901676 00:25:43.172 Received shutdown signal, test time was about 2.000000 seconds 00:25:43.172 00:25:43.172 Latency(us) 00:25:43.172 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.172 =================================================================================================================== 00:25:43.172 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.172 00:59:35 -- common/autotest_common.sh@960 -- # wait 2901676 00:25:43.806 00:59:36 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:25:43.806 00:59:36 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:25:43.806 00:59:36 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:25:43.806 00:59:36 -- host/digest.sh@80 -- # rw=randwrite 00:25:43.806 00:59:36 -- host/digest.sh@80 -- # bs=131072 00:25:43.806 00:59:36 -- host/digest.sh@80 -- # qd=16 00:25:43.806 00:59:36 -- host/digest.sh@80 -- # scan_dsa=false 00:25:43.806 00:59:36 -- host/digest.sh@83 -- # bperfpid=2902561 00:25:43.806 00:59:36 -- host/digest.sh@84 -- # waitforlisten 2902561 /var/tmp/bperf.sock 00:25:43.807 00:59:36 -- common/autotest_common.sh@817 -- # '[' -z 2902561 ']' 00:25:43.807 00:59:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:43.807 00:59:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:43.807 00:59:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:43.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:43.807 00:59:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:43.807 00:59:36 -- common/autotest_common.sh@10 -- # set +x 00:25:43.807 00:59:36 -- host/digest.sh@82 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:25:43.807 [2024-04-27 00:59:36.239362] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:25:43.807 [2024-04-27 00:59:36.239490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902561 ] 00:25:43.807 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:43.807 Zero copy mechanism will not be used. 
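The Average latency column in these tables is consistent with Little's law for a closed-loop generator: mean latency ~ queue_depth / IOPS, since each of the qd outstanding I/Os is resubmitted as soon as it completes. Checking two of the DSA-initiator runs earlier in this log:

    echo '128 / 20190.83 * 1000000' | bc -l   # ~6340 us vs. 6332.42 us reported (randread 4 KiB, qd 128)
    echo '16 / 6915.09 * 1000000' | bc -l     # ~2314 us vs. 2311.02 us reported (randread 128 KiB, qd 16)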
00:25:43.807 EAL: No free 2048 kB hugepages reported on node 1 00:25:43.807 [2024-04-27 00:59:36.356085] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:43.807 [2024-04-27 00:59:36.446647] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:44.373 00:59:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:44.373 00:59:36 -- common/autotest_common.sh@850 -- # return 0 00:25:44.373 00:59:36 -- host/digest.sh@86 -- # false 00:25:44.373 00:59:36 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:25:44.373 00:59:36 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:25:44.631 00:59:37 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:44.631 00:59:37 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:44.890 nvme0n1 00:25:44.890 00:59:37 -- host/digest.sh@92 -- # bperf_py perform_tests 00:25:44.890 00:59:37 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:45.151 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:45.151 Zero copy mechanism will not be used. 00:25:45.151 Running I/O for 2 seconds... 00:25:47.055 00:25:47.055 Latency(us) 00:25:47.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.055 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:25:47.055 nvme0n1 : 2.00 7549.78 943.72 0.00 0.00 2116.15 1034.78 5518.82 00:25:47.055 =================================================================================================================== 00:25:47.055 Total : 7549.78 943.72 0.00 0.00 2116.15 1034.78 5518.82 00:25:47.055 0 00:25:47.055 00:59:39 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:25:47.055 00:59:39 -- host/digest.sh@93 -- # get_accel_stats 00:25:47.055 00:59:39 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:25:47.055 00:59:39 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:25:47.055 | select(.opcode=="crc32c") 00:25:47.055 | "\(.module_name) \(.executed)"' 00:25:47.055 00:59:39 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:25:47.316 00:59:39 -- host/digest.sh@94 -- # false 00:25:47.316 00:59:39 -- host/digest.sh@94 -- # exp_module=software 00:25:47.316 00:59:39 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:25:47.316 00:59:39 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:25:47.316 00:59:39 -- host/digest.sh@98 -- # killprocess 2902561 00:25:47.316 00:59:39 -- common/autotest_common.sh@936 -- # '[' -z 2902561 ']' 00:25:47.316 00:59:39 -- common/autotest_common.sh@940 -- # kill -0 2902561 00:25:47.316 00:59:39 -- common/autotest_common.sh@941 -- # uname 00:25:47.316 00:59:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:47.316 00:59:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2902561 00:25:47.316 00:59:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:47.316 00:59:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:47.316 00:59:39 -- common/autotest_common.sh@954 -- # echo 'killing process 
with pid 2902561' 00:25:47.316 killing process with pid 2902561 00:25:47.316 00:59:39 -- common/autotest_common.sh@955 -- # kill 2902561 00:25:47.316 Received shutdown signal, test time was about 2.000000 seconds 00:25:47.316 00:25:47.316 Latency(us) 00:25:47.316 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:47.316 =================================================================================================================== 00:25:47.316 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:47.316 00:59:39 -- common/autotest_common.sh@960 -- # wait 2902561 00:25:47.576 00:59:40 -- host/digest.sh@132 -- # killprocess 2898663 00:25:47.576 00:59:40 -- common/autotest_common.sh@936 -- # '[' -z 2898663 ']' 00:25:47.576 00:59:40 -- common/autotest_common.sh@940 -- # kill -0 2898663 00:25:47.576 00:59:40 -- common/autotest_common.sh@941 -- # uname 00:25:47.576 00:59:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:47.576 00:59:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2898663 00:25:47.576 00:59:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:47.576 00:59:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:47.576 00:59:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2898663' 00:25:47.576 killing process with pid 2898663 00:25:47.577 00:59:40 -- common/autotest_common.sh@955 -- # kill 2898663 00:25:47.577 00:59:40 -- common/autotest_common.sh@960 -- # wait 2898663 00:25:50.114 00:25:50.114 real 0m26.441s 00:25:50.114 user 0m34.832s 00:25:50.114 sys 0m3.789s 00:25:50.114 00:59:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:50.114 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.114 ************************************ 00:25:50.114 END TEST nvmf_digest_dsa_target 00:25:50.114 ************************************ 00:25:50.114 00:59:42 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:25:50.114 00:59:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:25:50.114 00:59:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:50.114 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.114 ************************************ 00:25:50.114 START TEST nvmf_digest_error 00:25:50.114 ************************************ 00:25:50.114 00:59:42 -- common/autotest_common.sh@1111 -- # run_digest_error 00:25:50.114 00:59:42 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:25:50.114 00:59:42 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:50.114 00:59:42 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:50.114 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.114 00:59:42 -- nvmf/common.sh@470 -- # nvmfpid=2903822 00:25:50.114 00:59:42 -- nvmf/common.sh@471 -- # waitforlisten 2903822 00:25:50.114 00:59:42 -- common/autotest_common.sh@817 -- # '[' -z 2903822 ']' 00:25:50.114 00:59:42 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:25:50.114 00:59:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:50.114 00:59:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:50.114 00:59:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:50.114 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:50.114 00:59:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:50.114 00:59:42 -- common/autotest_common.sh@10 -- # set +x 00:25:50.375 [2024-04-27 00:59:42.839005] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:25:50.375 [2024-04-27 00:59:42.839127] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:50.375 EAL: No free 2048 kB hugepages reported on node 1 00:25:50.375 [2024-04-27 00:59:42.966384] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:50.375 [2024-04-27 00:59:43.063119] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:50.375 [2024-04-27 00:59:43.063167] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:50.375 [2024-04-27 00:59:43.063178] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:50.375 [2024-04-27 00:59:43.063194] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:50.375 [2024-04-27 00:59:43.063202] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:50.375 [2024-04-27 00:59:43.063256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.946 00:59:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:50.946 00:59:43 -- common/autotest_common.sh@850 -- # return 0 00:25:50.946 00:59:43 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:50.946 00:59:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:50.946 00:59:43 -- common/autotest_common.sh@10 -- # set +x 00:25:50.946 00:59:43 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.946 00:59:43 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:25:50.946 00:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:50.946 00:59:43 -- common/autotest_common.sh@10 -- # set +x 00:25:50.946 [2024-04-27 00:59:43.591829] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:25:50.946 00:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:50.946 00:59:43 -- host/digest.sh@105 -- # common_target_config 00:25:50.946 00:59:43 -- host/digest.sh@43 -- # rpc_cmd 00:25:50.946 00:59:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:50.946 00:59:43 -- common/autotest_common.sh@10 -- # set +x 00:25:51.204 null0 00:25:51.204 [2024-04-27 00:59:43.757424] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:51.204 [2024-04-27 00:59:43.781609] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:51.204 00:59:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:51.204 00:59:43 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:25:51.204 00:59:43 -- host/digest.sh@54 -- # local rw bs qd 00:25:51.204 00:59:43 -- host/digest.sh@56 -- # rw=randread 00:25:51.204 00:59:43 -- host/digest.sh@56 -- # bs=4096 00:25:51.204 00:59:43 -- host/digest.sh@56 -- # qd=128 00:25:51.204 00:59:43 -- host/digest.sh@58 -- # bperfpid=2903991 00:25:51.204 00:59:43 -- host/digest.sh@60 -- # waitforlisten 2903991 /var/tmp/bperf.sock 00:25:51.204 00:59:43 -- common/autotest_common.sh@817 -- # '[' -z 2903991 ']' 
00:25:51.204 00:59:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:51.204 00:59:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:51.204 00:59:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:51.204 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:51.204 00:59:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:51.204 00:59:43 -- common/autotest_common.sh@10 -- # set +x 00:25:51.204 00:59:43 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:25:51.204 [2024-04-27 00:59:43.861731] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:25:51.204 [2024-04-27 00:59:43.861841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2903991 ] 00:25:51.463 EAL: No free 2048 kB hugepages reported on node 1 00:25:51.463 [2024-04-27 00:59:43.979592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.463 [2024-04-27 00:59:44.070609] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:52.031 00:59:44 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:52.031 00:59:44 -- common/autotest_common.sh@850 -- # return 0 00:25:52.032 00:59:44 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.032 00:59:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:52.290 00:59:44 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:52.290 00:59:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.290 00:59:44 -- common/autotest_common.sh@10 -- # set +x 00:25:52.290 00:59:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.290 00:59:44 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.290 00:59:44 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:52.548 nvme0n1 00:25:52.548 00:59:45 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:25:52.548 00:59:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:52.548 00:59:45 -- common/autotest_common.sh@10 -- # set +x 00:25:52.548 00:59:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:52.548 00:59:45 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:52.548 00:59:45 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:52.548 Running I/O for 2 seconds... 
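This nvmf_digest_error phase differs from the earlier runs in three ways, all visible in the records above: the target assigns the crc32c opcode to the accel error-injection module, the initiator is configured to retry failed I/O indefinitely, and the injector is then armed to corrupt 256 crc32c results so every corrupted data digest surfaces at the initiator as a retryable transport error. Condensed from the log, with the target-side calls on the default /var/tmp/spdk.sock and the initiator-side calls on bperf.sock:

    # target (nvmf_tgt)
    rpc_cmd accel_assign_opc -o crc32c -m error                    # route crc32c through the error module
    rpc_cmd accel_error_inject_error -o crc32c -t disable          # start clean
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt the next 256 crc32c results
    # initiator (bdevperf)
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0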
00:25:52.548 [2024-04-27 00:59:45.192190] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.548 [2024-04-27 00:59:45.192250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.548 [2024-04-27 00:59:45.192265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.548 [2024-04-27 00:59:45.205166] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.548 [2024-04-27 00:59:45.205202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:1448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.548 [2024-04-27 00:59:45.205215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.548 [2024-04-27 00:59:45.217820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.548 [2024-04-27 00:59:45.217854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.548 [2024-04-27 00:59:45.217866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.548 [2024-04-27 00:59:45.226721] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.548 [2024-04-27 00:59:45.226757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:9788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.548 [2024-04-27 00:59:45.226769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.548 [2024-04-27 00:59:45.238511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.548 [2024-04-27 00:59:45.238546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:20320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.548 [2024-04-27 00:59:45.238558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.250081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.250115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:21079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.250126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.259195] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.259233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:15359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.259245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.271620] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.271651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:11524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.271661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.284667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.284701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23791 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.284713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.293374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.293406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:8934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.293417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.305697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.305734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:20430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.305750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.317241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.317272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:7397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.317282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.326081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.326113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:17532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.326125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.338793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.338832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:21987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:52.807 [2024-04-27 00:59:45.338843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.351616] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.351657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1092 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.351672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.363830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.363862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:10085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.363874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.376539] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.807 [2024-04-27 00:59:45.376572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:2070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.807 [2024-04-27 00:59:45.376582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.807 [2024-04-27 00:59:45.385832] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.385866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:4428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.385877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.397621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.397655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:8619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.397667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.410863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.410895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.410906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.423781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.423813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:120 nsid:1 lba:9151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.423824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.436649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.436681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6286 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.436692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.449756] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.449790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.449802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.458524] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.458557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:10376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.458569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.470376] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.470407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15210 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.470418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.483464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.483500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.483511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:52.808 [2024-04-27 00:59:45.495206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:52.808 [2024-04-27 00:59:45.495244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6945 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:52.808 [2024-04-27 00:59:45.495256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.504793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.066 
[2024-04-27 00:59:45.504826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:8806 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.066 [2024-04-27 00:59:45.504837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.516452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.066 [2024-04-27 00:59:45.516486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.066 [2024-04-27 00:59:45.516501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.529171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.066 [2024-04-27 00:59:45.529205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.066 [2024-04-27 00:59:45.529217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.538681] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.066 [2024-04-27 00:59:45.538717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:10355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.066 [2024-04-27 00:59:45.538728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.548183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.066 [2024-04-27 00:59:45.548224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:15706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.066 [2024-04-27 00:59:45.548236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.557691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.066 [2024-04-27 00:59:45.557723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:10808 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.066 [2024-04-27 00:59:45.557734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.567192] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.066 [2024-04-27 00:59:45.567227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:5954 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.066 [2024-04-27 00:59:45.567238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.576705] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.066 [2024-04-27 00:59:45.576742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.066 [2024-04-27 00:59:45.576752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.066 [2024-04-27 00:59:45.586243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.586274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:12056 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.586284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.594870] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.594901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:10990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.594911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.604549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.604578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.604588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.617614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.617643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.617653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.630580] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.630615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12742 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.630627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.638919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.638950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:19147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.638961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.651261] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.651295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:3883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.651306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.659868] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.659899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.659910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.672912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.672944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.672955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.685489] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.685521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:11120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.685532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.698336] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.698369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.698380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.709853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.709886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.709905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.719349] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.719380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.719391] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.732781] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.732813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.732823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.741205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.741240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.741252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.067 [2024-04-27 00:59:45.753938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.067 [2024-04-27 00:59:45.753970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:25227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.067 [2024-04-27 00:59:45.753981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.766356] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.766392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.766410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.778876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.778920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9033 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.778931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.787980] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.788017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18142 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.788032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.800888] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.800922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:14339 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.800934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.809036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.809066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.809077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.820411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.820445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:23920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.820456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.832329] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.832361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.832372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.840989] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.841021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.841032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.852763] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.852800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.852815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.860878] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.860910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.860923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.871815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.871846] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.871856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.885459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.885494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:1963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.885507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.893789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.893820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:19675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.893831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.905827] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.905859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.905870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.918619] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.918654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.918665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.929913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.929949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:13822 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.929961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.939244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.939275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:18162 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.939286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.951437] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.951469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:5939 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.951484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.959673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.959704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14777 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.959716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.971910] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.971940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.971951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.983384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.983414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:9958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.983424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:45.992156] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:45.992188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:45.992199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:46.001157] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:46.001189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:22783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:46.001200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.326 [2024-04-27 00:59:46.013213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.326 [2024-04-27 00:59:46.013248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.326 [2024-04-27 00:59:46.013260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.586 [2024-04-27 00:59:46.023867] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.023903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:6347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.023914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.033244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.033276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18395 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.033287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.043054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.043086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:25356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.043097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.055033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.055066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:13174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.055077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.065045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.065077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:10535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.065087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.073452] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.073481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.073491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.084464] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.084499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:18652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.084511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.093711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.093743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:9202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.093754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.106064] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.106096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.106107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.117819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.117853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.117865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.126919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.126949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.126965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.137979] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.138011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.138022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.147581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.147611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.147622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.158260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.158295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.158306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.169916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.169947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.169958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.179088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.179118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.179129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.188886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.188916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.188928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.198447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.198477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13965 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.198487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.207912] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.207944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:22620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.207956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.217513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.217544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.217555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.226512] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.226544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14588 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.226555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.237964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.237995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.238007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.246941] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.246972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.246983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.258585] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.258618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:24776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.258637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.267265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.267293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:13517 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.587 [2024-04-27 00:59:46.267304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.587 [2024-04-27 00:59:46.279665] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.587 [2024-04-27 00:59:46.279699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.588 [2024-04-27 00:59:46.279711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.848 [2024-04-27 00:59:46.292625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.848 [2024-04-27 00:59:46.292667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.848 [2024-04-27 00:59:46.292679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.848 [2024-04-27 00:59:46.300569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.848 [2024-04-27 00:59:46.300599] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:2702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.848 [2024-04-27 00:59:46.300615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.848 [2024-04-27 00:59:46.312865] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.848 [2024-04-27 00:59:46.312896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.848 [2024-04-27 00:59:46.312907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.848 [2024-04-27 00:59:46.324208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.848 [2024-04-27 00:59:46.324251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:8390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.848 [2024-04-27 00:59:46.324266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.848 [2024-04-27 00:59:46.333610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.848 [2024-04-27 00:59:46.333644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:8034 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.848 [2024-04-27 00:59:46.333656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.848 [2024-04-27 00:59:46.345328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.848 [2024-04-27 00:59:46.345359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22214 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.848 [2024-04-27 00:59:46.345370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.848 [2024-04-27 00:59:46.354929] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.848 [2024-04-27 00:59:46.354960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:5213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.354971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.364394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.364426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.364436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.373890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.373919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.373930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.383334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.383368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.383380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.392549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.392582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:13239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.392593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.402019] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.402049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:19366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.402060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.412899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.412939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:14363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.412954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.421074] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.421104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15584 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.421115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.432803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.432838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13289 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.432851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.444895] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.444928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.444939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.454213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.454252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.454267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.466801] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.466837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18758 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.466848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.479574] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.479605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:24537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.479620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.491330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.491366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.491381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.500362] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.500395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.500407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.512203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.512246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:16675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.512257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.524667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.524704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.524714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.533798] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.533838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:8394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.533851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:53.849 [2024-04-27 00:59:46.544106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:53.849 [2024-04-27 00:59:46.544136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:20334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:53.849 [2024-04-27 00:59:46.544147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.554400] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.554434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:6161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.554447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.563203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.563240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:14522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.563252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.574809] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.574852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.574863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.587918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.587951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.587964] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.596171] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.596203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.596214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.608411] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.608441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16294 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.608451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.621713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.621742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:24035 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.621752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.630560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.630595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.630606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.642839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.642871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:13879 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.642881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.656198] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.656246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:581 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.656261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.666521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.666551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13030 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.666566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.678880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.678914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.678925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.690833] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.690865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.690876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.699361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.699393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.699403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.711645] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.711682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.711694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.721305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.721346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.721357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.733717] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.733750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5955 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.733760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.745744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.745784] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.745795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.755209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.755253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21636 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.110 [2024-04-27 00:59:46.755265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.110 [2024-04-27 00:59:46.767513] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.110 [2024-04-27 00:59:46.767550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14761 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.111 [2024-04-27 00:59:46.767561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.111 [2024-04-27 00:59:46.780591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.111 [2024-04-27 00:59:46.780623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:12372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.111 [2024-04-27 00:59:46.780634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.111 [2024-04-27 00:59:46.793454] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.111 [2024-04-27 00:59:46.793489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:13154 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.111 [2024-04-27 00:59:46.793499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.111 [2024-04-27 00:59:46.803435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.111 [2024-04-27 00:59:46.803469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.111 [2024-04-27 00:59:46.803481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.370 [2024-04-27 00:59:46.815618] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.370 [2024-04-27 00:59:46.815651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:5099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.370 [2024-04-27 00:59:46.815662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.370 [2024-04-27 00:59:46.824096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x614000007240) 00:25:54.370 [2024-04-27 00:59:46.824128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:25025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.370 [2024-04-27 00:59:46.824139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.370 [2024-04-27 00:59:46.835416] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.370 [2024-04-27 00:59:46.835448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.370 [2024-04-27 00:59:46.835458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.370 [2024-04-27 00:59:46.846712] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.370 [2024-04-27 00:59:46.846743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:15119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.370 [2024-04-27 00:59:46.846753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.370 [2024-04-27 00:59:46.855560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.370 [2024-04-27 00:59:46.855603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.370 [2024-04-27 00:59:46.855619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.370 [2024-04-27 00:59:46.867463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.370 [2024-04-27 00:59:46.867494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.370 [2024-04-27 00:59:46.867505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.370 [2024-04-27 00:59:46.876456] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.370 [2024-04-27 00:59:46.876486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.370 [2024-04-27 00:59:46.876497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.370 [2024-04-27 00:59:46.887474] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.370 [2024-04-27 00:59:46.887505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.887515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 
[2024-04-27 00:59:46.899987] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.900028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:17136 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.900045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.909374] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.909406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.909417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.921508] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.921539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:17176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.921550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.931602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.931634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:14803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.931646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.940608] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.940638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.940648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.952506] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.952549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:1138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.952565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.961711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.961742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4734 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.961753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.973667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.973702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.973714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.985231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.985262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.985273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:46.994077] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:46.994115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:3141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:46.994128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:47.005387] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:47.005419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:22146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:47.005431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:47.018477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:47.018511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:47.018523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:47.031510] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:47.031542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:19926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:47.031553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:47.044556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:47.044590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:7331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 
00:59:47.044607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:47.056737] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:47.056777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:1469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:47.056793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.371 [2024-04-27 00:59:47.065890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.371 [2024-04-27 00:59:47.065929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.371 [2024-04-27 00:59:47.065942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.079011] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.079045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:16842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-04-27 00:59:47.079057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.092200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.092240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:10340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-04-27 00:59:47.092251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.104205] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.104251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-04-27 00:59:47.104266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.112521] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.112554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-04-27 00:59:47.112566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.126821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.126859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 
nsid:1 lba:8326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-04-27 00:59:47.126872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.140591] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.140628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-04-27 00:59:47.140641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.150384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.150421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:22618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-04-27 00:59:47.150432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.161467] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.161502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.630 [2024-04-27 00:59:47.161513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.630 [2024-04-27 00:59:47.169909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.630 [2024-04-27 00:59:47.169941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-04-27 00:59:47.169953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 [2024-04-27 00:59:47.179928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:54.631 [2024-04-27 00:59:47.179959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:54.631 [2024-04-27 00:59:47.179969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:54.631 00:25:54.631 Latency(us) 00:25:54.631 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.631 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:25:54.631 nvme0n1 : 2.05 22811.57 89.11 0.00 0.00 5491.34 2638.69 46358.10 00:25:54.631 =================================================================================================================== 00:25:54.631 Total : 22811.57 89.11 0.00 0.00 5491.34 2638.69 46358.10 00:25:54.631 0 00:25:54.631 00:59:47 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:54.631 00:59:47 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:54.631 00:59:47 -- host/digest.sh@18 
-- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:54.631 00:59:47 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:54.631 | .driver_specific 00:25:54.631 | .nvme_error 00:25:54.631 | .status_code 00:25:54.631 | .command_transient_transport_error' 00:25:54.889 00:59:47 -- host/digest.sh@71 -- # (( 183 > 0 )) 00:25:54.889 00:59:47 -- host/digest.sh@73 -- # killprocess 2903991 00:25:54.889 00:59:47 -- common/autotest_common.sh@936 -- # '[' -z 2903991 ']' 00:25:54.889 00:59:47 -- common/autotest_common.sh@940 -- # kill -0 2903991 00:25:54.890 00:59:47 -- common/autotest_common.sh@941 -- # uname 00:25:54.890 00:59:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:54.890 00:59:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2903991 00:25:54.890 00:59:47 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:54.890 00:59:47 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:54.890 00:59:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2903991' 00:25:54.890 killing process with pid 2903991 00:25:54.890 00:59:47 -- common/autotest_common.sh@955 -- # kill 2903991 00:25:54.890 Received shutdown signal, test time was about 2.000000 seconds 00:25:54.890 00:25:54.890 Latency(us) 00:25:54.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:54.890 =================================================================================================================== 00:25:54.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:54.890 00:59:47 -- common/autotest_common.sh@960 -- # wait 2903991 00:25:55.149 00:59:47 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:25:55.149 00:59:47 -- host/digest.sh@54 -- # local rw bs qd 00:25:55.149 00:59:47 -- host/digest.sh@56 -- # rw=randread 00:25:55.149 00:59:47 -- host/digest.sh@56 -- # bs=131072 00:25:55.149 00:59:47 -- host/digest.sh@56 -- # qd=16 00:25:55.149 00:59:47 -- host/digest.sh@58 -- # bperfpid=2904754 00:25:55.149 00:59:47 -- host/digest.sh@60 -- # waitforlisten 2904754 /var/tmp/bperf.sock 00:25:55.149 00:59:47 -- common/autotest_common.sh@817 -- # '[' -z 2904754 ']' 00:25:55.149 00:59:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:25:55.149 00:59:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:55.149 00:59:47 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:25:55.149 00:59:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:25:55.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:25:55.149 00:59:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:55.149 00:59:47 -- common/autotest_common.sh@10 -- # set +x 00:25:55.409 [2024-04-27 00:59:47.855468] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:25:55.409 [2024-04-27 00:59:47.855587] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2904754 ] 00:25:55.409 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:55.409 Zero copy mechanism will not be used. 
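The pass/fail decision for the run above comes from the get_transient_errcount helper traced in the preceding records: it queries bdevperf's RPC server for per-bdev I/O statistics and extracts the NVMe transient-transport-error counter from the JSON with jq, then asserts the count is non-zero (here, 183 > 0). A minimal standalone sketch of that check, reusing the socket and script paths from this run (the variable names are ours, not the script's):

    #!/usr/bin/env bash
    # Sketch of the get_transient_errcount check: read bdevperf's per-bdev NVMe
    # error statistics over its RPC socket and require at least one
    # COMMAND TRANSIENT TRANSPORT ERROR completion. Assumes bdevperf was started
    # with -r /var/tmp/bperf.sock and that bdev_nvme_set_options --nvme-error-stat
    # was applied, as in the traces above.
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    errs=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errs > 0 ))  # this run counted 183 transient transport errors

The jq path is the same filter the script prints above, just written in dotted form.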
00:25:55.409 EAL: No free 2048 kB hugepages reported on node 1 00:25:55.409 [2024-04-27 00:59:47.967035] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.409 [2024-04-27 00:59:48.055904] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.978 00:59:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:55.978 00:59:48 -- common/autotest_common.sh@850 -- # return 0 00:25:55.978 00:59:48 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:55.978 00:59:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:25:56.239 00:59:48 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:25:56.239 00:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.239 00:59:48 -- common/autotest_common.sh@10 -- # set +x 00:25:56.239 00:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.239 00:59:48 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.239 00:59:48 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:25:56.239 nvme0n1 00:25:56.499 00:59:48 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:25:56.499 00:59:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:56.499 00:59:48 -- common/autotest_common.sh@10 -- # set +x 00:25:56.499 00:59:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:56.499 00:59:48 -- host/digest.sh@69 -- # bperf_py perform_tests 00:25:56.499 00:59:48 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:25:56.499 I/O size of 131072 is greater than zero copy threshold (65536). 00:25:56.499 Zero copy mechanism will not be used. 00:25:56.499 Running I/O for 2 seconds... 
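The records above trace the full injection recipe for this 131072-byte, qd=16 pass: NVMe error statistics and unlimited bdev retries are enabled, any stale crc32c injection is cleared, the controller is attached with TCP data digest (--ddgst) on, the accel layer is told to corrupt the next 32 crc32c operations, and perform_tests starts the 2-second workload; each corrupted digest then surfaces below as a data digest error followed by a COMMAND TRANSIENT TRANSPORT ERROR completion. A condensed sketch of that RPC sequence follows; paths, target address, and flags are taken from this run, but directing the accel_error_inject_error calls at bperf.sock is an assumption, since the log does not show which socket rpc_cmd addresses:

    #!/usr/bin/env bash
    # Replay of the setup traced above. Pointing the inject calls at bperf.sock
    # is an assumption (rpc_cmd's socket is not visible in the log).
    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bperf.sock
    # Count every NVMe error status and retry indefinitely, so injected errors
    # are observable without failing the I/O.
    "$rpc" -s "$sock" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any previous crc32c injection, then attach with data digest enabled.
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t disable
    "$rpc" -s "$sock" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Corrupt the next 32 crc32c operations so received data digests fail to verify.
    "$rpc" -s "$sock" accel_error_inject_error -o crc32c -t corrupt -i 32
    # Run the 2-second randread workload configured when bdevperf was launched.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s "$sock" perform_tests

Note the sqhd values cycling 0001/0021/0041/0061 in the burst below: with -i 32 and qd=16, the 32 injected corruptions are spread across successive 16-deep submission-queue windows.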
00:25:56.499 [2024-04-27 00:59:49.035480] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.035534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.035550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.040229] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.040264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.040277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.044788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.044817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.044828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.049348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.049377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.049388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.053863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.053889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.053899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.058386] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.058411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.058421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.062918] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.062943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.062953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.067507] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.067530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.067540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.071839] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.499 [2024-04-27 00:59:49.071864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.499 [2024-04-27 00:59:49.071874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.499 [2024-04-27 00:59:49.076286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.076316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.076327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.080823] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.080847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.080858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.085321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.085343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.085353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.089909] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.089934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.089944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.094546] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.094582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:25:56.500 [2024-04-27 00:59:49.094592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.099061] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.099088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.099100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.104102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.104127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.104137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.109490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.109517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.109528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.116390] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.116416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.116426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.122080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.122106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.122116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.127604] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.127631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.127641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.132663] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.132689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.132699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.138553] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.138578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.138588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.145451] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.145476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.145486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.150949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.150975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.150985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.156435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.156459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.156469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.160479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.160504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.160514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.165101] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 [2024-04-27 00:59:49.165125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:56.500 [2024-04-27 00:59:49.165140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:56.500 [2024-04-27 00:59:49.169667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:56.500 
[... several hundred near-identical entries omitted: the same alternating pattern of "nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)" followed by paired READ / COMMAND TRANSIENT TRANSPORT ERROR (00/22) notices on qid:1, elapsed time 00:25:56.500 through 00:25:57.284, varying only in cid, lba, and sqhd ...]
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.790192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.790201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.796522] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.796546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.796556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.802566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.802591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.802600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.807438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.807463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.807473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.812302] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.812327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.812337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.817385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.817409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.817419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.824087] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.824111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.824120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.829246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.829272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.829282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.834100] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.834125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.834134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.838949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.838974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.838984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.843689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.843714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.843724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.848458] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.848483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.848493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.853399] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.853425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.853436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.860690] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.860721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.860733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.866256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.866281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.866296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.871273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.871297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.284 [2024-04-27 00:59:49.871307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.284 [2024-04-27 00:59:49.875594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.284 [2024-04-27 00:59:49.875618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.875628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.879589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.879615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.879626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.883664] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.883689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.883700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.887625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.887652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.887661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.892303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.892329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22912 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.892339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.898083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.898107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.898117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.904899] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.904923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.904933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.909647] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.909674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.909684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.912776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.912799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.912809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.917579] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.917605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.917616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.922054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.922078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.922089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.925848] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.925872] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.925882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.929723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.929747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.929757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.933200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.933233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.933244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.938096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.938124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.938134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.942423] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.942448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.942463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.946477] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.946502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.946511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.950406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.950430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.950440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.954106] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.954132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.954141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.958169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.958198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.958210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.962155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.962191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.962202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.967066] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.967094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.967105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.971187] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.971213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.971228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.285 [2024-04-27 00:59:49.975218] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.285 [2024-04-27 00:59:49.975247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.285 [2024-04-27 00:59:49.975257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:49.979957] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:49.979984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:49.979995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:49.985783] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:49.985807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:49.985817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:49.992412] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:49.992447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:49.992458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:49.997389] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:49.997417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:49.997427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.002589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.002625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.002638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.008247] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.008276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.008289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.013259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.013286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.013297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.018159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.018188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.018200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.022278] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.022307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.022324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.026459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.026490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.026508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.030466] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.030495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.030508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.035380] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.035410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.035422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.040155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.040183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.040195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.044384] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.044411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.044423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.048639] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.048667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.048679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.053759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.053788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.053800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.059295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.059324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.059338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.063699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.063726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.546 [2024-04-27 00:59:50.063737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.546 [2024-04-27 00:59:50.067916] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.546 [2024-04-27 00:59:50.067944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.067957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.073155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.073184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.073196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.079701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.079728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.079740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.086447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.086478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24352 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.086490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.091954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.092031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.092043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.095685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.095713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.095724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.101331] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.101356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.101368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.105915] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.105939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.105953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.110348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.110374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.110385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.114706] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.114731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.114742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.118882] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.118906] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.118916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.122561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.122586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.122604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.126241] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.126265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.126276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.129919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.129943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.129952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.133716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.133741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.133752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.137371] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.137394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.137403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.141286] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.141318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.141329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.145351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.145377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.145387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.149234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.149258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.149268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.153776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.153801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.153812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.159015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.159040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.159051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.164949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.164972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.164983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.171287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.171312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.171322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.176340] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.176365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.176376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 
00:59:50.181189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.181215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.181238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.185112] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.185138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.185148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.189519] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.547 [2024-04-27 00:59:50.189543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.547 [2024-04-27 00:59:50.189553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.547 [2024-04-27 00:59:50.194673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.194697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.194707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.199776] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.199799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.199809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.204212] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.204241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.204251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.208736] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.208759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.208770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.213353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.213378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.213389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.217058] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.217082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.217092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.220599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.220629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.220639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.225176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.225200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.225212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.230176] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.230202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.230213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.236638] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.236663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 00:59:50.236672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:57.548 [2024-04-27 00:59:50.241883] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:57.548 [2024-04-27 00:59:50.241907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:57.548 [2024-04-27 
00:59:50.241917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:57.811 [2024-04-27 00:59:50.246155] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:25:57.811 [2024-04-27 00:59:50.246184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.811 [2024-04-27 00:59:50.246196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:25:57.811 [2024-04-27 00:59:50.250381] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:25:57.811 [2024-04-27 00:59:50.250406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:57.811 [2024-04-27 00:59:50.250417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line pattern -- nvme_tcp.c:1447 data digest error on tqpair=(0x614000007240), the nvme_qpair.c:243 print of the failing READ, and its nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with dnr:0 -- repeats for successive READs (cid and lba varying per command) from 00:59:50.254089 through 00:59:50.850234 ...]
00:25:58.343 [2024-04-27 00:59:50.855038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:25:58.343 [2024-04-27 00:59:50.855062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.343 [2024-04-27 00:59:50.855072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:25:58.343 [2024-04-27 00:59:50.859886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240)
00:25:58.344 [2024-04-27 00:59:50.859910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:25:58.344 [2024-04-27 00:59:50.859919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:25:58.344 [2024-04-27 00:59:50.865211] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest
error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.865239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.865249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.872063] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.872088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.872097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.876844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.876872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.876883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.881643] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.881668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.881679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.886391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.886416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.886425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.890887] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.890912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.890921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.894564] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.894588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.894598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.898280] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.898307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.898317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.902045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.902076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.902086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.905735] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.905759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.905771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.909501] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.909524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.909535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.913244] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.913268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.913277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.916914] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.916939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.916953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.920572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.920597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.920607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.924283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.924308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.924320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.928020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.928046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.928056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.931938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.931963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.931973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.936631] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.936658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.936668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.940924] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.940947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.940959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.944740] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.944765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.944775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.949303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.949328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.949338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.955213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.955242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.955251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.961775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.961799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.961808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.966629] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.966653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.966663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.971375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.971400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.971409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.975310] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.975334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.344 [2024-04-27 00:59:50.975344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.344 [2024-04-27 00:59:50.978960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.344 [2024-04-27 00:59:50.978984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:50.978994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:50.982688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:50.982713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12960 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:50.982724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:50.986375] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:50.986399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:50.986409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:50.990092] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:50.990116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:50.990130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:50.993850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:50.993875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:50.993885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:50.997688] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:50.997715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:50.997724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:51.001797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:51.001822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:51.001832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:51.006549] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:51.006573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:51.006582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:51.011067] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:51.011089] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:51.011099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:51.016169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:51.016193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:51.016202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:51.020059] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:51.020084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:51.020094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:51.023920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:51.023945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:51.023954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:25:58.345 [2024-04-27 00:59:51.027826] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x614000007240) 00:25:58.345 [2024-04-27 00:59:51.027856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:58.345 [2024-04-27 00:59:51.027866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:25:58.345 00:25:58.345 Latency(us) 00:25:58.345 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.345 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:25:58.345 nvme0n1 : 2.00 6644.38 830.55 0.00 0.00 2405.54 545.41 8692.14 00:25:58.345 =================================================================================================================== 00:25:58.345 Total : 6644.38 830.55 0.00 0.00 2405.54 545.41 8692.14 00:25:58.345 0 00:25:58.604 00:59:51 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:25:58.604 00:59:51 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:25:58.604 00:59:51 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:25:58.604 | .driver_specific 00:25:58.604 | .nvme_error 00:25:58.604 | .status_code 00:25:58.604 | .command_transient_transport_error' 00:25:58.604 00:59:51 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:25:58.604 00:59:51 -- host/digest.sh@71 -- # (( 428 > 0 )) 00:25:58.604 00:59:51 -- host/digest.sh@73 -- # killprocess 2904754 00:25:58.604 
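The (( 428 > 0 )) check above is the actual pass/fail gate for the randread leg: get_transient_errcount asks the bdevperf app for its iostat JSON over the bperf RPC socket and pulls out the counter that bdev_nvme_set_options --nvme-error-stat accumulates, so the test only passes if the injected digest failures really surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions (here, 428 of them). A minimal standalone sketch of the same query, assuming the rpc.py path and socket this job uses:

  # Read back the transient-transport-error counter for nvme0n1.
  # Requires a bdevperf configured with bdev_nvme_set_options --nvme-error-stat.
  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( errcount > 0 ))   # non-zero exit status here fails the test run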
00:59:51 -- host/digest.sh@73 -- # killprocess 2904754
00:59:51 -- common/autotest_common.sh@936 -- # '[' -z 2904754 ']'
00:59:51 -- common/autotest_common.sh@940 -- # kill -0 2904754
00:59:51 -- common/autotest_common.sh@941 -- # uname
00:59:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:59:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2904754
00:59:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:59:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:59:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2904754'
killing process with pid 2904754
00:59:51 -- common/autotest_common.sh@955 -- # kill 2904754
Received shutdown signal, test time was about 2.000000 seconds
00:25:58.604
00:25:58.604 Latency(us)
00:25:58.604 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:25:58.604 ===================================================================================================================
00:25:58.604 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:25:58.604 00:59:51 -- common/autotest_common.sh@960 -- # wait 2904754
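The write leg repeats the experiment against a fresh bdevperf instance: run_bperf_err randwrite 4096 128 starts the app with -z (stay idle until a perform_tests RPC arrives), parks its RPC server on /var/tmp/bperf.sock, and waitforlisten polls that socket before any configuration is attempted, as the trace below shows. A rough sketch of that launch sequence, with a deliberately simplified stand-in for waitforlisten (the real helper in autotest_common.sh also checks that the pid is still alive and gives up after max_retries=100):

  # Launch bdevperf idle on a private RPC socket (flags copied from the trace below).
  /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &
  bperfpid=$!
  # Simplified waitforlisten: poll until the UNIX-domain RPC socket answers.
  while ! /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bperf.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done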
00:25:59.170 00:59:51 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:25:59.170 00:59:51 -- host/digest.sh@54 -- # local rw bs qd
00:25:59.170 00:59:51 -- host/digest.sh@56 -- # rw=randwrite
00:25:59.170 00:59:51 -- host/digest.sh@56 -- # bs=4096
00:25:59.170 00:59:51 -- host/digest.sh@56 -- # qd=128
00:25:59.170 00:59:51 -- host/digest.sh@58 -- # bperfpid=2905569
00:25:59.170 00:59:51 -- host/digest.sh@60 -- # waitforlisten 2905569 /var/tmp/bperf.sock
00:25:59.170 00:59:51 -- common/autotest_common.sh@817 -- # '[' -z 2905569 ']'
00:25:59.170 00:59:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:25:59.170 00:59:51 -- common/autotest_common.sh@822 -- # local max_retries=100
00:25:59.170 00:59:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:59:51 -- common/autotest_common.sh@826 -- # xtrace_disable
00:25:59.170 00:59:51 -- common/autotest_common.sh@10 -- # set +x
00:25:59.170 00:59:51 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:25:59.170 [2024-04-27 00:59:51.698716] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization...
00:25:59.170 [2024-04-27 00:59:51.698841] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2905569 ]
00:25:59.170 EAL: No free 2048 kB hugepages reported on node 1
00:25:59.170 [2024-04-27 00:59:51.812163] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:25:59.429 [2024-04-27 00:59:51.900984] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:00.000 00:59:52 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:00.000 00:59:52 -- common/autotest_common.sh@850 -- # return 0
00:26:00.000 00:59:52 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:00.000 00:59:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:00.000 00:59:52 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:00.000 00:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:00.000 00:59:52 -- common/autotest_common.sh@10 -- # set +x
00:26:00.000 00:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:00.000 00:59:52 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:00.000 00:59:52 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:00.259 nvme0n1
00:26:00.259 00:59:52 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:26:00.259 00:59:52 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:00.259 00:59:52 -- common/autotest_common.sh@10 -- # set +x
00:26:00.259 00:59:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:00.259 00:59:52 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:00.259 00:59:52 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:00.259 Running I/O for 2 seconds...
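Those few RPCs are the whole setup for this failure-injection pass: --nvme-error-stat turns on the per-status-code counters that get read back afterwards, --bdev-retry-count -1 lets the bdev layer keep retrying transient errors, --ddgst attaches the TCP controller with data digest (a CRC32C over each data PDU) enabled, and accel_error_inject_error -o crc32c -t corrupt -i 256 tells the accel error module to corrupt crc32c results so digest verification keeps failing once I/O starts. Replayed as a plain script (note that in the trace the two accel_error_inject_error calls go through rpc_cmd rather than bperf_rpc, so which socket they should target is an assumption here):

  rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
  # Per-status-code NVMe error counters plus unlimited bdev-layer retries:
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # Attach the target with TCP data digest enabled (the second -s is the svcid):
  "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt crc32c operations in the accel framework (flags copied from the trace;
  # socket omitted because rpc_cmd in the harness does not pass -s here):
  "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 256
  # Kick the queued job; bdevperf then runs randwrite, 4 KiB, QD 128, for 2 s:
  /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bperf.sock perform_tests

Every error triplet that follows is one corrupted digest: the tcp transport reports the digest mismatch on a data PDU, the failed WRITE is printed, and its completion comes back as COMMAND TRANSIENT TRANSPORT ERROR (status code type 00h, status code 22h) with dnr:0, i.e. retryable.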
00:26:00.259 [2024-04-27 00:59:52.934498] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8
00:26:00.259 [2024-04-27 00:59:52.935272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.260 [2024-04-27 00:59:52.935312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:26:00.260 [2024-04-27 00:59:52.944201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fac10
00:26:00.260 [2024-04-27 00:59:52.944935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:23860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:00.260 [2024-04-27 00:59:52.944968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
[... repeated Data digest error entries omitted: the pattern above recurs for dozens of WRITEs on tqpair 0x618000004480, differing only in pdu, cid, and lba ...]
00:26:01.056 [2024-04-27 00:59:53.587863] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270
00:26:01.056 [2024-04-27 00:59:53.588457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:01.056 [2024-04-27 00:59:53.588483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.597698] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e9168 00:26:01.056 [2024-04-27 00:59:53.598420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.598445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.607534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ddc00 00:26:01.056 [2024-04-27 00:59:53.608387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:25410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.608410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.617370] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:26:01.056 [2024-04-27 00:59:53.618352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:20616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.618374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.628523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:26:01.056 [2024-04-27 00:59:53.630030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.630054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.635199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:01.056 [2024-04-27 00:59:53.635780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.635804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.644702] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:26:01.056 [2024-04-27 00:59:53.645288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:14061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.645313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.654107] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:26:01.056 [2024-04-27 00:59:53.654690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:1331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:01.056 [2024-04-27 00:59:53.654714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.662887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fe720 00:26:01.056 [2024-04-27 00:59:53.663466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:14484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.663490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.672738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed920 00:26:01.056 [2024-04-27 00:59:53.673444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:25217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.673467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.682580] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8e88 00:26:01.056 [2024-04-27 00:59:53.683417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.683443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.692417] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:26:01.056 [2024-04-27 00:59:53.693385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:7187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.693410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.702257] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e4140 00:26:01.056 [2024-04-27 00:59:53.703362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.703385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.712094] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:26:01.056 [2024-04-27 00:59:53.713336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.713364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.721941] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e23b8 00:26:01.056 [2024-04-27 00:59:53.723311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 
nsid:1 lba:13509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.723335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.731779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eff18 00:26:01.056 [2024-04-27 00:59:53.733279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14637 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.733304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.738461] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:01.056 [2024-04-27 00:59:53.739029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.739053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:01.056 [2024-04-27 00:59:53.747956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa7d8 00:26:01.056 [2024-04-27 00:59:53.748533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.056 [2024-04-27 00:59:53.748557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.759261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6fa8 00:26:01.313 [2024-04-27 00:59:53.760362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:4720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.760387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.768523] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6fa8 00:26:01.313 [2024-04-27 00:59:53.769620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:3507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.769643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.777893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6fa8 00:26:01.313 [2024-04-27 00:59:53.778975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.778998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.786665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:01.313 [2024-04-27 00:59:53.787751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.787773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.796509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:26:01.313 [2024-04-27 00:59:53.797722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:14442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.797745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.806358] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:26:01.313 [2024-04-27 00:59:53.807699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.807723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.816207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df988 00:26:01.313 [2024-04-27 00:59:53.817684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.817707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.826194] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f2948 00:26:01.313 [2024-04-27 00:59:53.827810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:14372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.827834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.832884] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaab8 00:26:01.313 [2024-04-27 00:59:53.833570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:25514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.833593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.842393] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195df550 00:26:01.313 [2024-04-27 00:59:53.843067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.843089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.851779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with 
pdu=0x2000195df550 00:26:01.313 [2024-04-27 00:59:53.852461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.852485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.861135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:26:01.313 [2024-04-27 00:59:53.861809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.861832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.869898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:26:01.313 [2024-04-27 00:59:53.870575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:5848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.870597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.879767] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:26:01.313 [2024-04-27 00:59:53.880564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.880587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.889665] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:01.313 [2024-04-27 00:59:53.890593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:6844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.890618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.899501] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f20d8 00:26:01.313 [2024-04-27 00:59:53.900561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.900586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.909356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f8618 00:26:01.313 [2024-04-27 00:59:53.910556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.910581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:01.313 [2024-04-27 00:59:53.919201] tcp.c:2047:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef270 00:26:01.313 [2024-04-27 00:59:53.920527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:15266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.313 [2024-04-27 00:59:53.920551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:01.314 [2024-04-27 00:59:53.929032] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6738 00:26:01.314 [2024-04-27 00:59:53.930494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:53.930518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:01.314 [2024-04-27 00:59:53.938870] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eea00 00:26:01.314 [2024-04-27 00:59:53.940463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:16707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:53.940487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:01.314 [2024-04-27 00:59:53.945554] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eaef0 00:26:01.314 [2024-04-27 00:59:53.946223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:4107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:53.946246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:01.314 [2024-04-27 00:59:53.955051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ed920 00:26:01.314 [2024-04-27 00:59:53.955716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:53.955740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:01.314 [2024-04-27 00:59:53.963835] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea680 00:26:01.314 [2024-04-27 00:59:53.964488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:53.964511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:01.314 [2024-04-27 00:59:53.973675] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:26:01.314 [2024-04-27 00:59:53.974463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:53.974486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:01.314 
[2024-04-27 00:59:53.983512] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e99d8 00:26:01.314 [2024-04-27 00:59:53.984437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:53.984460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:01.314 [2024-04-27 00:59:53.993344] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1ca0 00:26:01.314 [2024-04-27 00:59:53.994396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:12712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:53.994419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:01.314 [2024-04-27 00:59:54.003189] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:26:01.314 [2024-04-27 00:59:54.004391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.314 [2024-04-27 00:59:54.004414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.013055] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ec840 00:26:01.572 [2024-04-27 00:59:54.014368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:12484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.014392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.022892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e95a0 00:26:01.572 [2024-04-27 00:59:54.024343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.024366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.032722] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fb048 00:26:01.572 [2024-04-27 00:59:54.034307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.034332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.039404] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:26:01.572 [2024-04-27 00:59:54.040055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:13664 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.040077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.049250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f6890 00:26:01.572 [2024-04-27 00:59:54.050036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:1907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.050060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.059083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:26:01.572 [2024-04-27 00:59:54.060009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.060032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.068933] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fd640 00:26:01.572 [2024-04-27 00:59:54.069984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:2355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.070008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.078496] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ea248 00:26:01.572 [2024-04-27 00:59:54.079541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:8121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.079565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.087269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:26:01.572 [2024-04-27 00:59:54.088311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:25112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.088334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.097305] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7818 00:26:01.572 [2024-04-27 00:59:54.098481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.098505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.107138] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:26:01.572 [2024-04-27 00:59:54.108449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:7305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.108475] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.116976] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fef90 00:26:01.572 [2024-04-27 00:59:54.118416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:17987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.118445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.126814] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5ec8 00:26:01.572 [2024-04-27 00:59:54.128396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:1900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.128422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.133504] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e88f8 00:26:01.572 [2024-04-27 00:59:54.134150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.134174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.143013] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195dece0 00:26:01.572 [2024-04-27 00:59:54.143653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.143676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.151773] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc998 00:26:01.572 [2024-04-27 00:59:54.152409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.152432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.161605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:26:01.572 [2024-04-27 00:59:54.162366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:12893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.162389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.172781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fc128 00:26:01.572 [2024-04-27 00:59:54.174075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:9500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:01.572 [2024-04-27 00:59:54.174099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.182622] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ef6a8 00:26:01.572 [2024-04-27 00:59:54.184042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.184065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.192473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee190 00:26:01.572 [2024-04-27 00:59:54.194027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:5845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.194053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.199152] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e7c50 00:26:01.572 [2024-04-27 00:59:54.199790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:17411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.199815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.208653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195edd58 00:26:01.572 [2024-04-27 00:59:54.209284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8072 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.209308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.217425] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e73e0 00:26:01.572 [2024-04-27 00:59:54.218048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.218070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.227261] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:26:01.572 [2024-04-27 00:59:54.228019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:17422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.228049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:01.572 [2024-04-27 00:59:54.237106] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f5378 00:26:01.572 [2024-04-27 00:59:54.237991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 
nsid:1 lba:19884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.572 [2024-04-27 00:59:54.238016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:01.573 [2024-04-27 00:59:54.246936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f57b0 00:26:01.573 [2024-04-27 00:59:54.247955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.573 [2024-04-27 00:59:54.247978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:01.573 [2024-04-27 00:59:54.256778] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3e60 00:26:01.573 [2024-04-27 00:59:54.257923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.573 [2024-04-27 00:59:54.257946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:01.573 [2024-04-27 00:59:54.266625] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:26:01.573 [2024-04-27 00:59:54.267911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:8149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.573 [2024-04-27 00:59:54.267937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.276484] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:26:01.838 [2024-04-27 00:59:54.277894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:6868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.277922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.286321] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1b48 00:26:01.838 [2024-04-27 00:59:54.287866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:13169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.287890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.292992] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e6300 00:26:01.838 [2024-04-27 00:59:54.293614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.293638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.302493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e1710 00:26:01.838 [2024-04-27 00:59:54.303119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:21625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.303141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.311959] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f31b8 00:26:01.838 [2024-04-27 00:59:54.312582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:4900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.312605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.320600] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f3a28 00:26:01.838 [2024-04-27 00:59:54.321209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:15629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.321238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.330440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:26:01.838 [2024-04-27 00:59:54.331176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:21264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.331203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.341629] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:26:01.838 [2024-04-27 00:59:54.342893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.342919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.351479] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195ee5c8 00:26:01.838 [2024-04-27 00:59:54.352865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:7289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.352890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.360830] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:26:01.838 [2024-04-27 00:59:54.362215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.362245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.370690] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 
00:26:01.838 [2024-04-27 00:59:54.372199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12990 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.372228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.377371] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fa3a0 00:26:01.838 [2024-04-27 00:59:54.377967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.377991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.386883] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:26:01.838 [2024-04-27 00:59:54.387467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:24800 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.387490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.396248] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f1868 00:26:01.838 [2024-04-27 00:59:54.396834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:25228 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.396856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.405737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:26:01.838 [2024-04-27 00:59:54.406321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:25075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.406344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.416795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:26:01.838 [2024-04-27 00:59:54.417886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:4358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.417909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.425583] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:26:01.838 [2024-04-27 00:59:54.426668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:22672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.426691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.436761] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x618000004480) with pdu=0x2000195eb328 00:26:01.838 [2024-04-27 00:59:54.438364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.438389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.443451] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e38d0 00:26:01.838 [2024-04-27 00:59:54.444129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10564 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.838 [2024-04-27 00:59:54.444152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:01.838 [2024-04-27 00:59:54.454090] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fda78 00:26:01.839 [2024-04-27 00:59:54.454964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:16382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-04-27 00:59:54.454993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:01.839 [2024-04-27 00:59:54.464216] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195efae0 00:26:01.839 [2024-04-27 00:59:54.464892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:7162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-04-27 00:59:54.464917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:01.839 [2024-04-27 00:59:54.474101] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f96f8 00:26:01.839 [2024-04-27 00:59:54.474910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-04-27 00:59:54.474940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:01.839 [2024-04-27 00:59:54.483977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f0bc0 00:26:01.839 [2024-04-27 00:59:54.484922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-04-27 00:59:54.484946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:01.839 [2024-04-27 00:59:54.493856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195fbcf0 00:26:01.839 [2024-04-27 00:59:54.494925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:21937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:01.839 [2024-04-27 00:59:54.494951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:01.839 [2024-04-27 
00:59:54.503737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195f4f40
00:26:01.839 [2024-04-27 00:59:54.504943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8630 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:01.839 [2024-04-27 00:59:54.504968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:26:01.839 [... the same triplet (data digest error on tqpair 0x618000004480, 4 KiB WRITE, completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of further commands, timestamps 00:59:54.513 through 00:59:54.908; only the last occurrence is shown ...]
00:26:02.360 [2024-04-27 00:59:54.917201] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000004480) with pdu=0x2000195e5658
00:26:02.360 [2024-04-27 00:59:54.917789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:02.360 [2024-04-27 00:59:54.917816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:02.360
00:26:02.360 Latency(us)
00:26:02.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.360 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:26:02.360 nvme0n1 : 2.00 26744.62 104.47 0.00 0.00 4779.47 2414.48 15590.67
00:26:02.360 ===================================================================================================================
00:26:02.360 Total : 26744.62 104.47 0.00 0.00 4779.47 2414.48 15590.67
00:26:02.360 0
00:26:02.360 00:59:54 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:02.360 00:59:54 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:02.360 | .driver_specific
00:26:02.360 | .nvme_error
00:26:02.360 | .status_code
00:26:02.360 | .command_transient_transport_error'
00:26:02.360 00:59:54 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:02.360 00:59:54 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:02.618 00:59:55 -- host/digest.sh@71 -- # (( 210 > 0 ))
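The (( 210 > 0 )) check above is the assertion of this test case: host/digest.sh reads the bdev's NVMe error statistics over the bdevperf RPC socket and requires a non-zero command_transient_transport_error count, i.e. every write that went out with a corrupted data digest must have completed as a retryable transport error rather than as silent corruption. A minimal standalone sketch of the same query, assuming a bdevperf instance serving RPCs on /var/tmp/bperf.sock that was configured with --nvme-error-stat:

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    # With --nvme-error-stat, bdev_get_iostat also reports per-status-code
    # NVMe completion counts under driver_specific.nvme_error.
    errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
    (( errcount > 0 )) || echo "expected transient transport errors, got $errcount" >&2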
00:59:55 -- host/digest.sh@73 -- # killprocess 2905569
00:59:55 -- common/autotest_common.sh@936 -- # '[' -z 2905569 ']'
00:59:55 -- common/autotest_common.sh@940 -- # kill -0 2905569
00:59:55 -- common/autotest_common.sh@941 -- # uname
00:59:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:59:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2905569
00:59:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:59:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:02.618 00:59:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2905569'
00:26:02.618 killing process with pid 2905569
00:26:02.618 00:59:55 -- common/autotest_common.sh@955 -- # kill 2905569
00:26:02.618 Received shutdown signal, test time was about 2.000000 seconds
00:26:02.618
00:26:02.618 Latency(us)
00:26:02.619 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:02.619 ===================================================================================================================
00:26:02.619 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:02.619 00:59:55 -- common/autotest_common.sh@960 -- # wait 2905569
00:26:02.876 00:59:55 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:59:55 -- host/digest.sh@54 -- # local rw bs qd
00:59:55 -- host/digest.sh@56 -- # rw=randwrite
00:59:55 -- host/digest.sh@56 -- # bs=131072
00:59:55 -- host/digest.sh@56 -- # qd=16
00:59:55 -- host/digest.sh@58 -- # bperfpid=2906268
00:59:55 -- host/digest.sh@60 -- # waitforlisten 2906268 /var/tmp/bperf.sock
00:59:55 -- common/autotest_common.sh@817 -- # '[' -z 2906268 ']'
00:59:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock
00:59:55 -- common/autotest_common.sh@822 -- # local max_retries=100
00:59:55 -- host/digest.sh@57 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:59:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:59:55 -- common/autotest_common.sh@826 -- # xtrace_disable
00:59:55 -- common/autotest_common.sh@10 -- # set +x
00:26:03.133 [2024-04-27 00:59:55.558942] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization...
00:26:03.133 [2024-04-27 00:59:55.559058] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2906268 ]
00:26:02.876 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:02.876 Zero copy mechanism will not be used.
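The -z flag on the bdevperf invocation above makes it start idle and wait for a perform_tests RPC instead of running immediately, and waitforlisten blocks until the new process is actually serving on /var/tmp/bperf.sock before the harness issues any bperf_rpc call. A rough sketch of that launch-and-wait pattern; the polling loop is an approximation standing in for the harness's waitforlisten helper, not the literal code (max_retries=100 is taken from the trace):

    # Start bdevperf idle (-z) with the same workload parameters as above.
    /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll for the UNIX-domain RPC socket instead of racing the startup.
    for ((i = 0; i < 100; i++)); do
        [[ -S /var/tmp/bperf.sock ]] && break
        sleep 0.1
    done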
00:26:03.133 EAL: No free 2048 kB hugepages reported on node 1
00:26:03.133 [2024-04-27 00:59:55.671280] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:03.133 [2024-04-27 00:59:55.760896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1
00:26:03.699 00:59:56 -- common/autotest_common.sh@846 -- # (( i == 0 ))
00:26:03.699 00:59:56 -- common/autotest_common.sh@850 -- # return 0
00:26:03.699 00:59:56 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:03.699 00:59:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:26:03.959 00:59:56 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:26:03.959 00:59:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:03.959 00:59:56 -- common/autotest_common.sh@10 -- # set +x
00:26:03.959 00:59:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:03.959 00:59:56 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:03.959 00:59:56 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:26:03.959 nvme0n1
00:26:03.959 00:59:56 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:26:03.959 00:59:56 -- common/autotest_common.sh@549 -- # xtrace_disable
00:26:03.959 00:59:56 -- common/autotest_common.sh@10 -- # set +x
00:26:03.959 00:59:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]]
00:26:03.959 00:59:56 -- host/digest.sh@69 -- # bperf_py perform_tests
00:26:04.222 00:59:56 -- host/digest.sh@19 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:26:04.222 I/O size of 131072 is greater than zero copy threshold (65536).
00:26:04.222 Zero copy mechanism will not be used.
00:26:04.222 Running I/O for 2 seconds...
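That RPC sequence is the entire mechanism behind the errors that follow: NVMe error statistics and bdev-layer retries are enabled on the bdevperf side, any leftover crc32c error injection is cleared, the controller is attached with --ddgst so every TCP data PDU carries a CRC32C data digest, and the accel error injector is then armed to corrupt crc32c results, which makes the digest check fail for the 128 KiB writes below (len:32, i.e. 32 logical blocks each). A condensed sketch of the same calls; the harness's bare rpc_cmd invocations are shown here against the target application's default RPC socket, which is an assumption, since the trace does not print the socket they use:

    rpc=/var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py
    # bdevperf side: tally NVMe completions per status code, retry indefinitely.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Clear any stale crc32c injection before attaching the controller.
    "$rpc" accel_error_inject_error -o crc32c -t disable
    # Attach with data digest (--ddgst) enabled on the TCP transport.
    "$rpc" -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Arm crc32c corruption, then kick off the queued randwrite workload.
    "$rpc" accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests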
00:26:04.222 [2024-04-27 00:59:56.721568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:26:04.222 [2024-04-27 00:59:56.721833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.222 [2024-04-27 00:59:56.721877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:04.222 [... the same triplet (data digest error on tqpair 0x618000005080, 128 KiB WRITE with len:32, completion with COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for dozens of further writes, timestamps 00:59:56.726 through 00:59:57.062; only the last complete occurrence is shown ...]
00:26:04.488 [2024-04-27 00:59:57.066664] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:26:04.488 [2024-04-27 00:59:57.066885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.488 [2024-04-27 00:59:57.066913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:04.488 [2024-04-27 00:59:57.070872] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with
pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.071103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.071128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.074977] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.075199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.075229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.078724] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.078939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.078962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.082540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.082760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.082786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.086553] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.086774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.086800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.091112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.091343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.091367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.094801] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.095021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.095046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.098365] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.098578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.098603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.101641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.101854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.101881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.105531] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.105746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.105770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.109741] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.109971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.109996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.115231] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.115445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.115469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.119866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.120087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.120111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.123760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.123978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.124002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.127774] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.128002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.488 [2024-04-27 00:59:57.128029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.488 [2024-04-27 00:59:57.131659] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.488 [2024-04-27 00:59:57.131877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.131907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.135095] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.135316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.135341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.139306] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.139522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.139548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.142663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.142877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.142902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.146199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.146418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.146444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.150051] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.150274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.150298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.153943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.154157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.154181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.159091] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.159308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.159337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.164681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.164896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.164921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.168985] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.169215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.169254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.173237] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.173451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.173477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.176794] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.177010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.177034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.489 [2024-04-27 00:59:57.180596] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.489 [2024-04-27 00:59:57.180812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:04.489 [2024-04-27 00:59:57.180839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.183879] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.184094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.184118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.187781] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.187993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.188018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.193018] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.193235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.193264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.198780] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.198995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.199019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.202850] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.202952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.202980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.208084] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.208308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.208336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.213866] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.214080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.214105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.218256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.218480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.218505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.222202] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.222421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.222449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.225848] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.226063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.226087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.229970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.230184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.230208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.233480] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.233693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.233717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.237267] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.237483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.237507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.240557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.240777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.240800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.243789] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.244001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.244024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.247062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.247278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.247303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.250338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.250555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.250580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.253632] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.253841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.253865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.256914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.257132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.257156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.260228] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.260445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.750 [2024-04-27 00:59:57.260468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.750 [2024-04-27 00:59:57.264165] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.750 [2024-04-27 00:59:57.264384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.264407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.269332] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.269547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.269576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.275104] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.275329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.275353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.280739] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.280961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.280985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.286719] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.286946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.286973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.293812] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.294034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.294059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.300535] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.300735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.300759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.306509] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.306726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.306752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.312605] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.312820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.312849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.318588] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.318791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.318817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.324522] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.324737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.324763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.330526] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.330750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.330778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.337907] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.338124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.338151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.344054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.344277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.344304] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.350066] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.350299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.350323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.357518] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.357745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.357771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.363135] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.363353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.363378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.367312] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.367528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.367553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.370764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.370975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.371003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.374030] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.374246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.374272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.377349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.377561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.377587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.380982] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.381202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.381235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.385663] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.385876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.385901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.390314] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.390528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.390557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.394081] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.394314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.394341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.397760] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.397972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.397998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.401427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.401645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.751 [2024-04-27 00:59:57.401670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.751 [2024-04-27 00:59:57.405230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.751 [2024-04-27 00:59:57.405447] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.405471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.409759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.409973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.409999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.413844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.414054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.414079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.417072] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.417293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.417317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.421074] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.421293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.421317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.425283] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.425501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.425524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.428591] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.428806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.428830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.432412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.432628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.432655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.435874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.436084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.436108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.439167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.439385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.439409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:04.752 [2024-04-27 00:59:57.443398] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:04.752 [2024-04-27 00:59:57.443609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.752 [2024-04-27 00:59:57.443632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.011 [2024-04-27 00:59:57.449017] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.011 [2024-04-27 00:59:57.449233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.011 [2024-04-27 00:59:57.449260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.011 [2024-04-27 00:59:57.455038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.011 [2024-04-27 00:59:57.455248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.011 [2024-04-27 00:59:57.455280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.011 [2024-04-27 00:59:57.462168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.011 [2024-04-27 00:59:57.462394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.462420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.468515] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.468728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.468754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.474490] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.474704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.474731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.480832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.481055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.481081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.487994] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.488212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.488243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.494058] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.494279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.494304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.500038] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.500244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.500271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.507798] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.508022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.508052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.514054] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.514272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.514301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.520243] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.520470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.520499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.526340] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.526556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.526583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.533426] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.533651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.533680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.539856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.540084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.540111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.544520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.544738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.544764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.547829] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.548040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.548064] nvme_qpair.c: 
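What repeats here is the NVMe/TCP data-digest (DDGST) check failing: for each WRITE, data_crc32_calc_done() in tcp.c finds that the CRC32C it computed over the received data PDU payload does not match the DDGST field carried in the PDU, and the command is completed as COMMAND TRANSIENT TRANSPORT ERROR (00/22), i.e. status code type 0, status code 0x22, a retryable transport-level failure. The steady stream of these errors is presumably the test corrupting digests on purpose. As a rough illustration of the checksum being verified (a minimal bit-at-a-time reference, not SPDK's actual table-driven/accelerated implementation; the crc32c() helper below is hypothetical):

    /* Reference CRC-32C (Castagnoli, reflected polynomial 0x82F63B78),
     * the checksum NVMe/TCP uses for the HDGST/DDGST fields that
     * data_crc32_calc_done() is validating in the log above.
     * Illustrative sketch only, not SPDK's implementation. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;           /* initial value */
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int b = 0; b < 8; b++)       /* one bit at a time */
                crc = (crc >> 1) ^ ((crc & 1) ? 0x82F63B78u : 0);
        }
        return crc ^ 0xFFFFFFFFu;             /* final inversion */
    }

    int main(void)
    {
        /* Standard CRC-32C check value: crc32c("123456789") == 0xE3069283. */
        const char *msg = "123456789";
        printf("0x%08X\n", crc32c((const uint8_t *)msg, strlen(msg)));
        return 0;
    }

Compiled and run, this prints 0xE3069283, the well-known CRC-32C check value for the ASCII string "123456789"; a receiver computes the same function over the data PDU payload and raises the digest error seen above whenever the result disagrees with the DDGST sent on the wire.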
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.551068] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.551283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.551307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.554355] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.554570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.554594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.557650] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.557862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.557888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.560928] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.561141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.561167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.564250] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.564466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.564491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.568134] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.568354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.568379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.571951] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.572200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.572243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.576336] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.576596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.576626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.580594] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.580840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.580866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.585287] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.585501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.585525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.590301] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.590523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.590553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.595195] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.595417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.012 [2024-04-27 00:59:57.595442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.012 [2024-04-27 00:59:57.600118] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.012 [2024-04-27 00:59:57.600347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.600373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.604779] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.604992] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.605016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.610047] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.610252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.610276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.615128] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.615355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.615381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.619956] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.620166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.620190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.624930] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.625145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.625169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.629705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.629912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.629939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.634918] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.635129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.635158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.639635] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.639847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.639873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.644792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.645023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.645049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.649980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.650204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.650234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.653489] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.653548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.653576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.657142] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.657200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.657229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.661129] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.661362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.661387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.665008] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.665233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.665259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.668560] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.668786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.668812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.672628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.672852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.672877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.676389] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.676602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.676628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.679705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.679917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.679942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.683303] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.683528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.683553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.687319] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.687543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.687572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.692083] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.692309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.692336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.695962] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.696179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.696209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.699720] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.699936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.699962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.013 [2024-04-27 00:59:57.703269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.013 [2024-04-27 00:59:57.703485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.013 [2024-04-27 00:59:57.703510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.014 [2024-04-27 00:59:57.706603] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.014 [2024-04-27 00:59:57.706822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.014 [2024-04-27 00:59:57.706846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.710252] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.710470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.710495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.715077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.715296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.715321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.721039] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.721243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.721275] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.726686] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.726915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.726941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.733574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.733806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.733835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.739022] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.739268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.739295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.743386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.743615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.743641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.747053] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.747288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.747314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.750565] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.750789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.750817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.754182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.754410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.754438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.758533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.758746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.758771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.763756] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.763963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.763988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.767714] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.767939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.767964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.771610] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.771824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.771849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.775291] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.775505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.775529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.779060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.779277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.779304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.782329] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.782540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.782566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.786160] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.786392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.786421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.789874] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.790087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.790113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.794182] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.794420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.794452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.799679] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.799892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.799917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.804597] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.804814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.804842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.809181] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.809399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.809426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.814744] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:05.274 [2024-04-27 00:59:57.815025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.274 [2024-04-27 00:59:57.815052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.274 [2024-04-27 00:59:57.819123] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.819333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.819363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.822745] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.822943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.822969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.825905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.826116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.826142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.829449] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.829653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.829681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.833148] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.833362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.833391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.836382] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.836580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.836604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.839559] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.839759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.839793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.843158] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.843367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.843392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.847725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.847926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.847954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.852255] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.852511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.852538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.857351] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.857540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.857565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.861001] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.861190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.861217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.864557] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.864745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.864769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.867902] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.868089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.868114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.871045] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.871238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.871263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.874159] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.874348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.874372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.877318] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.877505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.877528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.880905] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.881098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.881121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.885730] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.885926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.885952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.889468] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.889661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.889685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.893035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.893231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.893256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.896943] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.897135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.897163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.901628] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.901914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.901939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.907100] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.907390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.907417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.912520] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.912829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.912855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.275 [2024-04-27 00:59:57.917984] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.275 [2024-04-27 00:59:57.918282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.275 [2024-04-27 00:59:57.918308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.923450] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.923759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.923783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.928796] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.929105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.929130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.934174] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.934458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.934485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.939514] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.939811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.939836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.944967] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.945180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.945205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.950269] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.950572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.950596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.955738] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.956038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.956066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.961338] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.961625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.961649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.276 [2024-04-27 00:59:57.966571] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.276 [2024-04-27 00:59:57.966858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.276 [2024-04-27 00:59:57.966882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:57.972208] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:57.972445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:57.972469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:57.978075] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:57.978368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:57.978396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:57.984120] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:57.984352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:57.984378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:57.990811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:57.991001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:57.991031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:57.995497] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:57.995687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:57.995711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:57.998970] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:57.999156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:57.999180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.002035] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.002227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.002251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.005170] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.005363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.005387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.008273] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.008462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.008486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.011899] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.012088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.012112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.016427] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.016618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.016644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.021429] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.021633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.021659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.025213] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.025407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.025434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.028683] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.028871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.028895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.032432] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.032617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.032644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.037697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.037985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.038010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.043167] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.043369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.043393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.049601] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.049842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.049867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.055716] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.055907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.535 [2024-04-27 00:59:58.055932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.535 [2024-04-27 00:59:58.060028] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.535 [2024-04-27 00:59:58.060225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.060253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.063347] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.063534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.063563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.066593] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.066795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.066819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.069721] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.069919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.069944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.072887] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.073084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.073111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.076256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.076454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.076481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.080643] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.080833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.080858] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.084993] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.085184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.085210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.088640] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.088829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.088855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.092536] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.092736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.092760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.096705] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.096901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.096925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.101445] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.101653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.101677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.107168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.107487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.107513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.113910] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.114188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.114214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.120037] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.120288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.120313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.125737] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.126019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.126044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.131546] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.131857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.131884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.137414] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.137714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.137740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.144653] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.144862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.144891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.150786] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.151103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.151127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.156558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.156839] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.156867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.162440] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.162708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.162734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.168232] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.168541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.168566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.173968] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.174272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.174298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.536 [2024-04-27 00:59:58.179844] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.536 [2024-04-27 00:59:58.180128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.536 [2024-04-27 00:59:58.180154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.185561] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.185869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.185894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.191333] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.191570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.191595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.195893] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.196096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.196121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.199380] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.199573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.199597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.202466] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.202656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.202682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.205585] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.205778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.205810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.208696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.208891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.208917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.212065] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.212258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.212281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.215936] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.216126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.216149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.220569] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.220739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.220763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.224033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.224196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.224228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.537 [2024-04-27 00:59:58.227548] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.537 [2024-04-27 00:59:58.227721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.537 [2024-04-27 00:59:58.227750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.799 [2024-04-27 00:59:58.231011] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.799 [2024-04-27 00:59:58.231184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.799 [2024-04-27 00:59:58.231208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.799 [2024-04-27 00:59:58.234374] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.234542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.234566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.237766] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.237931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.237954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.241263] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.241433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.241458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.244765] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.244936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.244962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.248346] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.248514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.248538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.251696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.251865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.251888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.255620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.255844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.255867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.259538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.259722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.259745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.263378] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.263559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.263583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.267199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.267420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.267444] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.271085] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.271260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.271283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.275621] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.275792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.275817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.279834] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.280010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.280040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.283785] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.283956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.283981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.288368] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.288575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.288601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.294168] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.294378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.294403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.299277] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.299446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.299474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.302894] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.303063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.303089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.305898] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.306064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.306088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.308913] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.309078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.309104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.312401] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.312567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.312590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.317064] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.317314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.317337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.322646] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.322871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.322895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.327792] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.327950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.327973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.334356] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.334606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.334634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.339132] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.339319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.339345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.342473] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.800 [2024-04-27 00:59:58.342643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.800 [2024-04-27 00:59:58.342668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.800 [2024-04-27 00:59:58.345533] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.345700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.345728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.348641] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.348790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.348816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.351811] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.351969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.351994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.355987] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with 
pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.356139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.356164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.359191] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.359347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.359374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.362349] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.362498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.362526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.365558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.365722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.365753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.368813] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.368963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.368988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.372020] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.372189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.372215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.375215] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.375378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.375408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.378348] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.378496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.378523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.381545] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.381703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.381732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.384757] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.384901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.384925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.387856] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.388004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.388040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.391077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.391228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.391255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.394241] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.394389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.394414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.397441] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.397586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.397609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.400528] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.400672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.400696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.403706] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.403851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.403874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.406914] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.407061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.407085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.410114] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.410264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.410288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.413207] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.413355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.413380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.416385] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.416529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.416553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.419521] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.419671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.419697] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.422681] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.422829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.422853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.425857] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.426004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.426027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.801 [2024-04-27 00:59:58.428980] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.801 [2024-04-27 00:59:58.429126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.801 [2024-04-27 00:59:58.429148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.432099] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.432254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.432278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.435293] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.435438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.435462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.438477] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.438622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.438646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.441644] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.441787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.441815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.444804] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.444950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.444973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.447997] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.448144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.448167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.451112] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.451267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.451293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.454386] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.454533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.454558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.457534] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.457682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.457705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.460617] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.460763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.460785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.463800] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.463945] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.463969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.466940] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.467084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.467109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.470153] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.470307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.470332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.473256] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.473403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.473424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.476397] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.476542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.476563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.479495] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.479641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.479664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.482637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.482780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.482803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.485824] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.485980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.486004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.488960] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.489108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.489130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:05.802 [2024-04-27 00:59:58.492199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:05.802 [2024-04-27 00:59:58.492349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:05.802 [2024-04-27 00:59:58.492374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.495365] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.495511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.495541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.498540] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.498688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.498712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.501696] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.501840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.501863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.504832] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.504979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.505001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.508026] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.508174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.508203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.511233] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.511378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.511402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.514735] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.514931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.514954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.518934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.519144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.519170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.523934] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.524121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.524144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.529493] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.529686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.529710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.535574] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.535786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.535815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.540551] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.540738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.540764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.545552] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.545807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.545831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.550636] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.550887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.550910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.555670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.555857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.555882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.560670] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.560919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.560943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.565731] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.565890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.565913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.570697] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.570954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.570981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.575725] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.575914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.575937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.580742] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.580993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.581017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.585764] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.585947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.585972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.590821] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.591030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.591053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.595852] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.596102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.596127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.600892] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.066 [2024-04-27 00:59:58.601069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.066 [2024-04-27 00:59:58.601093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.066 [2024-04-27 00:59:58.606000] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.606269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.606293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.610122] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.610309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.610335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.613062] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.613210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.613238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.615818] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.615965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.615990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.618626] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.618769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.618792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.621645] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.621793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.621814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.624749] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.624897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.624925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.628662] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.628838] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.628864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.631538] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.631687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.631713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.634345] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.634490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.634517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.637149] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.637301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.637331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.640176] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.640323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.640348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.644131] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.644290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.644315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.648325] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.648472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.648496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.651558] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) 
with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.651702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.651726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.654637] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.654783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.654806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.657795] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.657939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.657963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.661060] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.661204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.661234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.664230] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.664375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.664398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.667412] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.667564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.667587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.670623] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.670769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.670792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.673814] 
tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.673957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.673979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.677077] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.677229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.677259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.680246] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.680393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.680419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.683423] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.683572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.683597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.686620] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.686765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.067 [2024-04-27 00:59:58.686789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.067 [2024-04-27 00:59:58.689759] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.067 [2024-04-27 00:59:58.689902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.068 [2024-04-27 00:59:58.689925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.068 [2024-04-27 00:59:58.692888] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90 00:26:06.068 [2024-04-27 00:59:58.693033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.068 [2024-04-27 00:59:58.693060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:06.068 [2024-04-27 00:59:58.696033] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:26:06.068 [2024-04-27 00:59:58.696178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.068 [2024-04-27 00:59:58.696204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:06.068 [2024-04-27 00:59:58.699199] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:26:06.068 [2024-04-27 00:59:58.699352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.068 [2024-04-27 00:59:58.699377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:06.068 [2024-04-27 00:59:58.702428] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:26:06.068 [2024-04-27 00:59:58.702573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.068 [2024-04-27 00:59:58.702596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:26:06.068 [2024-04-27 00:59:58.705568] tcp.c:2047:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x618000005080) with pdu=0x2000195fef90
00:26:06.068 [2024-04-27 00:59:58.705713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:06.068 [2024-04-27 00:59:58.705737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:06.068
00:26:06.068 Latency(us)
00:26:06.068 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:06.068 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:06.068 nvme0n1 : 2.00 7161.59 895.20 0.00 0.00 2230.63 1302.10 13659.08
00:26:06.068 ===================================================================================================================
00:26:06.068 Total : 7161.59 895.20 0.00 0.00 2230.63 1302.10 13659.08
00:26:06.068 0
00:26:06.068 00:59:58 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:06.068 00:59:58 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:06.068 00:59:58 -- host/digest.sh@18 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:06.068 00:59:58 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:06.068 | .driver_specific
00:26:06.068 | .nvme_error
00:26:06.068 | .status_code
00:26:06.068 | .command_transient_transport_error'
00:26:06.328 00:59:58 -- host/digest.sh@71 -- # (( 462 > 0 ))
00:26:06.328 00:59:58 -- host/digest.sh@73 -- # killprocess 2906268
00:26:06.328 00:59:58 -- common/autotest_common.sh@936 -- # '[' -z 2906268 ']'
00:26:06.328 00:59:58 -- common/autotest_common.sh@940 -- # kill -0 2906268
00:26:06.328 00:59:58 -- common/autotest_common.sh@941 -- # uname
00:26:06.328 00:59:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:06.328 00:59:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2906268
00:26:06.328 00:59:58 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:26:06.328 00:59:58 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:26:06.328 00:59:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2906268'
00:26:06.328 killing process with pid 2906268
00:26:06.328 00:59:58 -- common/autotest_common.sh@955 -- # kill 2906268
00:26:06.328 Received shutdown signal, test time was about 2.000000 seconds
00:26:06.328
00:26:06.328 Latency(us)
00:26:06.328 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:06.328 ===================================================================================================================
00:26:06.328 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:06.328 00:59:58 -- common/autotest_common.sh@960 -- # wait 2906268
00:26:06.893 00:59:59 -- host/digest.sh@116 -- # killprocess 2903822
00:26:06.893 00:59:59 -- common/autotest_common.sh@936 -- # '[' -z 2903822 ']'
00:26:06.893 00:59:59 -- common/autotest_common.sh@940 -- # kill -0 2903822
00:26:06.893 00:59:59 -- common/autotest_common.sh@941 -- # uname
00:26:06.893 00:59:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:26:06.893 00:59:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2903822
00:26:06.893 00:59:59 -- common/autotest_common.sh@942 -- # process_name=reactor_0
00:26:06.893 00:59:59 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']'
00:26:06.893 00:59:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2903822'
00:26:06.893 killing process with pid 2903822
00:26:06.893 00:59:59 -- common/autotest_common.sh@955 -- # kill 2903822
00:26:06.893 00:59:59 -- common/autotest_common.sh@960 -- # wait 2903822
00:26:07.152
00:26:07.152 real 0m17.073s
00:26:07.152 user 0m32.557s
00:26:07.152 sys 0m3.654s
00:26:07.152 00:59:59 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:07.152 00:59:59 -- common/autotest_common.sh@10 -- # set +x
00:26:07.152 ************************************
00:26:07.152 END TEST nvmf_digest_error
00:26:07.152 ************************************
00:26:07.152 00:59:59 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:26:07.152 00:59:59 -- host/digest.sh@150 -- # nvmftestfini
00:26:07.152 00:59:59 -- nvmf/common.sh@477 -- # nvmfcleanup
00:26:07.152 00:59:59 -- nvmf/common.sh@117 -- # sync
00:26:07.410 00:59:59 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:07.410 00:59:59 -- nvmf/common.sh@120 -- # set +e
00:26:07.410 00:59:59 -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:07.410 00:59:59 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:07.410 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:26:07.410 00:59:59 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:07.410 00:59:59 -- nvmf/common.sh@124 -- # set -e
00:26:07.410 00:59:59 -- nvmf/common.sh@125 -- # return 0
00:26:07.410 00:59:59 -- nvmf/common.sh@478 -- # '[' -n 2903822 ']'
00:26:07.410 00:59:59 -- nvmf/common.sh@479 -- # killprocess 2903822
00:26:07.410 00:59:59 -- common/autotest_common.sh@936 -- # '[' -z 2903822 ']'
00:26:07.410 00:59:59 -- common/autotest_common.sh@940 -- # kill -0 2903822
00:26:07.410 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2903822) - No such process
00:26:07.410 00:59:59 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2903822 is not found'
00:26:07.410 Process with pid 2903822 is not found
00:26:07.410 00:59:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']'
00:26:07.410 00:59:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]]
00:26:07.410 00:59:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini
00:26:07.410 00:59:59 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:07.410 00:59:59 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:07.410 00:59:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:07.410 00:59:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:07.410 00:59:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:09.318 01:00:01 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:09.318
00:26:09.318 real 1m44.304s
00:26:09.318 user 2m21.969s
00:26:09.318 sys 0m15.971s
00:26:09.318 01:00:01 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:09.318 01:00:01 -- common/autotest_common.sh@10 -- # set +x
00:26:09.318 ************************************
00:26:09.318 END TEST nvmf_digest
00:26:09.318 ************************************
00:26:09.318 01:00:01 -- nvmf/nvmf.sh@108 -- # [[ 0 -eq 1 ]]
00:26:09.318 01:00:01 -- nvmf/nvmf.sh@113 -- # [[ 0 -eq 1 ]]
00:26:09.318 01:00:01 -- nvmf/nvmf.sh@118 -- # [[ phy-fallback == phy ]]
00:26:09.318 01:00:01 -- nvmf/nvmf.sh@123 -- # timing_exit host
00:26:09.318 01:00:01 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:09.318 01:00:01 -- common/autotest_common.sh@10 -- # set +x
00:26:09.574 01:00:02 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT
00:26:09.574
00:26:09.574 real 15m45.349s
00:26:09.574 user 31m24.387s
00:26:09.574 sys 4m25.862s
00:26:09.574 01:00:02 -- common/autotest_common.sh@1112 -- # xtrace_disable
00:26:09.574 01:00:02 -- common/autotest_common.sh@10 -- # set +x
00:26:09.574 ************************************
00:26:09.574 END TEST nvmf_tcp
00:26:09.574 ************************************
00:26:09.574 01:00:02 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]]
00:26:09.574 01:00:02 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:09.574 01:00:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']'
00:26:09.574 01:00:02 -- common/autotest_common.sh@1093 -- # xtrace_disable
00:26:09.574 01:00:02 -- common/autotest_common.sh@10 -- # set +x
00:26:09.574 ************************************
00:26:09.574 START TEST spdkcli_nvmf_tcp
00:26:09.574 ************************************
00:26:09.574 01:00:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:26:09.574 * Looking for test storage...
00:26:09.574 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli 00:26:09.574 01:00:02 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/common.sh 00:26:09.574 01:00:02 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:26:09.574 01:00:02 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/dsa-phy-autotest/spdk/test/json_config/clear_config.py 00:26:09.574 01:00:02 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.574 01:00:02 -- nvmf/common.sh@7 -- # uname -s 00:26:09.574 01:00:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.574 01:00:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.574 01:00:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.574 01:00:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.574 01:00:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.574 01:00:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.575 01:00:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.575 01:00:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.575 01:00:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.575 01:00:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.575 01:00:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:26:09.575 01:00:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:26:09.575 01:00:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.575 01:00:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.575 01:00:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:09.575 01:00:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.575 01:00:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:09.575 01:00:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.575 01:00:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.575 01:00:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.575 01:00:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.575 01:00:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.575 01:00:02 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.575 01:00:02 -- paths/export.sh@5 -- # export PATH 00:26:09.575 01:00:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.575 01:00:02 -- nvmf/common.sh@47 -- # : 0 00:26:09.575 01:00:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.575 01:00:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.575 01:00:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.575 01:00:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.575 01:00:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.575 01:00:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:09.575 01:00:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.575 01:00:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.575 01:00:02 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:26:09.575 01:00:02 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:26:09.575 01:00:02 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:26:09.575 01:00:02 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:26:09.575 01:00:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:09.575 01:00:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.575 01:00:02 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:26:09.575 01:00:02 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2907751 00:26:09.575 01:00:02 -- spdkcli/common.sh@34 -- # waitforlisten 2907751 00:26:09.575 01:00:02 -- common/autotest_common.sh@817 -- # '[' -z 2907751 ']' 00:26:09.575 01:00:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:09.575 01:00:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:09.575 01:00:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:09.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:09.575 01:00:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:09.575 01:00:02 -- common/autotest_common.sh@10 -- # set +x 00:26:09.575 01:00:02 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:26:09.831 [2024-04-27 01:00:02.296453] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:26:09.831 [2024-04-27 01:00:02.296560] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2907751 ] 00:26:09.831 EAL: No free 2048 kB hugepages reported on node 1 00:26:09.831 [2024-04-27 01:00:02.414733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:09.831 [2024-04-27 01:00:02.511931] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.831 [2024-04-27 01:00:02.511938] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:10.397 01:00:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:10.397 01:00:02 -- common/autotest_common.sh@850 -- # return 0 00:26:10.397 01:00:02 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:26:10.397 01:00:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:10.397 01:00:03 -- common/autotest_common.sh@10 -- # set +x 00:26:10.397 01:00:03 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:26:10.397 01:00:03 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:26:10.397 01:00:03 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:26:10.397 01:00:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:10.397 01:00:03 -- common/autotest_common.sh@10 -- # set +x 00:26:10.397 01:00:03 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:26:10.397 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:26:10.397 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:26:10.397 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:26:10.397 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:26:10.397 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:26:10.397 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:26:10.397 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:10.397 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:10.397 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' 
'\''127.0.0.1:4261'\'' True
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\''
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True
00:26:10.397 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True
00:26:10.397 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\''
00:26:10.397 '
00:26:10.963 [2024-04-27 01:00:03.354078] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05
00:26:12.867 [2024-04-27 01:00:05.413896] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:14.240 [2024-04-27 01:00:06.575563] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 ***
00:26:16.143 [2024-04-27 01:00:08.706141] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 ***
00:26:18.048 [2024-04-27 01:00:10.536337] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 ***
00:26:19.502 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True]
00:26:19.502 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True]
00:26:19.502 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True]
00:26:19.502 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True]
00:26:19.502 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True]
00:26:19.502 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True]
00:26:19.502 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True]
00:26:19.502 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:26:19.502 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:26:19.502 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True]
00:26:19.502 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True]
00:26:19.502 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False]
00:26:19.502 01:00:12 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config
00:26:19.502 01:00:12 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:19.502 01:00:12 -- common/autotest_common.sh@10 -- # set +x
00:26:19.502 01:00:12 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match
00:26:19.502 01:00:12 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:19.502 01:00:12 -- common/autotest_common.sh@10 -- # set +x
00:26:19.502 01:00:12 -- spdkcli/nvmf.sh@69 -- # check_match
00:26:19.502 01:00:12 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf
00:26:19.773 01:00:12 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match
00:26:19.773 01:00:12 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test
00:26:19.773 01:00:12 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match
00:26:19.773 01:00:12 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:19.773 01:00:12 -- common/autotest_common.sh@10 -- # set +x
00:26:19.773 01:00:12 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config
00:26:19.773 01:00:12 -- common/autotest_common.sh@710 -- # xtrace_disable
00:26:19.773 01:00:12 -- common/autotest_common.sh@10 -- # set +x
00:26:19.773 01:00:12 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\''
00:26:19.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\''
00:26:19.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:26:19.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\''
00:26:19.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\''
00:26:19.773 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\''
00:26:19.773 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\''
00:26:19.773 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\''
00:26:19.773 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\''
00:26:19.773 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\''
00:26:19.773 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\''
00:26:19.773 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\''
00:26:19.773 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\''
00:26:19.773 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\''
00:26:19.773 '
00:26:25.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False]
00:26:25.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False]
00:26:25.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False]
00:26:25.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False]
00:26:25.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False]
00:26:25.051 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False]
00:26:25.051 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False]
00:26:25.051 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False]
00:26:25.051 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False]
00:26:25.051 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False]
00:26:25.051 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False]
00:26:25.051 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False]
00:26:25.051 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False]
00:26:25.051 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False]
00:26:25.051 01:00:17 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config
00:26:25.051 01:00:17 -- common/autotest_common.sh@716 -- # xtrace_disable
00:26:25.051 01:00:17 -- common/autotest_common.sh@10 -- # set +x
00:26:25.051 01:00:17 -- spdkcli/nvmf.sh@90 -- # killprocess 2907751
00:26:25.051 01:00:17 -- common/autotest_common.sh@936 -- # '[' -z 2907751 ']'
00:26:25.051 01:00:17 -- common/autotest_common.sh@940 -- # kill -0 2907751
00:26:25.051 01:00:17 -- common/autotest_common.sh@941 -- # uname
00:26:25.051 01:00:17 -- common/autotest_common.sh@941 -- # '['
Linux = Linux ']' 00:26:25.051 01:00:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2907751 00:26:25.051 01:00:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:25.051 01:00:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:25.051 01:00:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2907751' 00:26:25.051 killing process with pid 2907751 00:26:25.051 01:00:17 -- common/autotest_common.sh@955 -- # kill 2907751 00:26:25.051 [2024-04-27 01:00:17.488230] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:25.051 01:00:17 -- common/autotest_common.sh@960 -- # wait 2907751 00:26:25.311 01:00:17 -- spdkcli/nvmf.sh@1 -- # cleanup 00:26:25.311 01:00:17 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:26:25.311 01:00:17 -- spdkcli/common.sh@13 -- # '[' -n 2907751 ']' 00:26:25.311 01:00:17 -- spdkcli/common.sh@14 -- # killprocess 2907751 00:26:25.311 01:00:17 -- common/autotest_common.sh@936 -- # '[' -z 2907751 ']' 00:26:25.311 01:00:17 -- common/autotest_common.sh@940 -- # kill -0 2907751 00:26:25.311 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2907751) - No such process 00:26:25.311 01:00:17 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2907751 is not found' 00:26:25.311 Process with pid 2907751 is not found 00:26:25.311 01:00:17 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:26:25.311 01:00:17 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:26:25.311 01:00:17 -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/dsa-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:26:25.311 00:26:25.311 real 0m15.834s 00:26:25.311 user 0m31.904s 00:26:25.311 sys 0m0.781s 00:26:25.311 01:00:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:25.311 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:25.311 ************************************ 00:26:25.311 END TEST spdkcli_nvmf_tcp 00:26:25.311 ************************************ 00:26:25.311 01:00:17 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:25.311 01:00:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:25.311 01:00:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:25.311 01:00:17 -- common/autotest_common.sh@10 -- # set +x 00:26:25.570 ************************************ 00:26:25.570 START TEST nvmf_identify_passthru 00:26:25.570 ************************************ 00:26:25.570 01:00:18 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:26:25.570 * Looking for test storage... 
00:26:25.570 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:26:25.570 01:00:18 -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:25.570 01:00:18 -- nvmf/common.sh@7 -- # uname -s 00:26:25.570 01:00:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:25.570 01:00:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:25.570 01:00:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:25.570 01:00:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:25.570 01:00:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:25.570 01:00:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:25.570 01:00:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:25.570 01:00:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:25.570 01:00:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:25.570 01:00:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:25.570 01:00:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:26:25.570 01:00:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:26:25.570 01:00:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:25.570 01:00:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:25.570 01:00:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:25.570 01:00:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:25.570 01:00:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:25.570 01:00:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.570 01:00:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.570 01:00:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.570 01:00:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.570 01:00:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.571 01:00:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.571 01:00:18 -- paths/export.sh@5 -- # export PATH 00:26:25.571 01:00:18 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.571 01:00:18 -- nvmf/common.sh@47 -- # : 0 00:26:25.571 01:00:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:25.571 01:00:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:25.571 01:00:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:25.571 01:00:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:25.571 01:00:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:25.571 01:00:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:25.571 01:00:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:25.571 01:00:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:25.571 01:00:18 -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:25.571 01:00:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:25.571 01:00:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:25.571 01:00:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:25.571 01:00:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.571 01:00:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.571 01:00:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.571 01:00:18 -- paths/export.sh@5 -- # export PATH 00:26:25.571 01:00:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:25.571 01:00:18 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:26:25.571 01:00:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:25.571 01:00:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:25.571 01:00:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:25.571 01:00:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:25.571 01:00:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:25.571 01:00:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:25.571 01:00:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:25.571 01:00:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:25.571 01:00:18 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:26:25.571 01:00:18 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:25.571 01:00:18 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:25.571 01:00:18 -- common/autotest_common.sh@10 -- # set +x 00:26:30.850 01:00:23 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:30.850 01:00:23 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:30.850 01:00:23 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:30.850 01:00:23 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:30.850 01:00:23 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:30.850 01:00:23 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:30.850 01:00:23 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:30.850 01:00:23 -- nvmf/common.sh@295 -- # net_devs=() 00:26:30.850 01:00:23 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:30.850 01:00:23 -- nvmf/common.sh@296 -- # e810=() 00:26:30.850 01:00:23 -- nvmf/common.sh@296 -- # local -ga e810 00:26:30.850 01:00:23 -- nvmf/common.sh@297 -- # x722=() 00:26:30.850 01:00:23 -- nvmf/common.sh@297 -- # local -ga x722 00:26:30.850 01:00:23 -- nvmf/common.sh@298 -- # mlx=() 00:26:30.850 01:00:23 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:30.850 01:00:23 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:30.850 01:00:23 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:30.850 01:00:23 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:30.850 01:00:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.850 01:00:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:30.850 Found 0000:27:00.0 (0x8086 - 
0x159b) 00:26:30.850 01:00:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:30.850 01:00:23 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:30.850 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:30.850 01:00:23 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:30.850 01:00:23 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:30.851 01:00:23 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:30.851 01:00:23 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:30.851 01:00:23 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:26:30.851 01:00:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.851 01:00:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.851 01:00:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:30.851 01:00:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.851 01:00:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:30.851 Found net devices under 0000:27:00.0: cvl_0_0 00:26:30.851 01:00:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.851 01:00:23 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:30.851 01:00:23 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:30.851 01:00:23 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:30.851 01:00:23 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:30.851 01:00:23 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:30.851 Found net devices under 0000:27:00.1: cvl_0_1 00:26:30.851 01:00:23 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:30.851 01:00:23 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:30.851 01:00:23 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:30.851 01:00:23 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:30.851 01:00:23 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:30.851 01:00:23 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:30.851 01:00:23 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:30.851 01:00:23 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:30.851 01:00:23 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:30.851 01:00:23 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:30.851 01:00:23 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:30.851 01:00:23 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:30.851 01:00:23 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:30.851 01:00:23 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:30.851 01:00:23 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:30.851 01:00:23 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:30.851 01:00:23 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:30.851 01:00:23 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:30.851 01:00:23 -- nvmf/common.sh@251 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:26:30.851 01:00:23 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:30.851 01:00:23 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:30.851 01:00:23 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:30.851 01:00:23 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:30.851 01:00:23 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:30.851 01:00:23 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:30.851 01:00:23 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:30.851 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:30.851 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:26:30.851 00:26:30.851 --- 10.0.0.2 ping statistics --- 00:26:30.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.851 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:26:30.851 01:00:23 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:30.851 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:30.851 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.060 ms 00:26:30.851 00:26:30.851 --- 10.0.0.1 ping statistics --- 00:26:30.851 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:30.851 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:26:30.851 01:00:23 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:30.851 01:00:23 -- nvmf/common.sh@411 -- # return 0 00:26:30.851 01:00:23 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:30.851 01:00:23 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:30.851 01:00:23 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:30.851 01:00:23 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:30.851 01:00:23 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:30.851 01:00:23 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:30.851 01:00:23 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:30.851 01:00:23 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:26:30.851 01:00:23 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:30.851 01:00:23 -- common/autotest_common.sh@10 -- # set +x 00:26:30.851 01:00:23 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:26:30.851 01:00:23 -- common/autotest_common.sh@1510 -- # bdfs=() 00:26:30.851 01:00:23 -- common/autotest_common.sh@1510 -- # local bdfs 00:26:30.851 01:00:23 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:26:30.851 01:00:23 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:26:30.851 01:00:23 -- common/autotest_common.sh@1499 -- # bdfs=() 00:26:30.851 01:00:23 -- common/autotest_common.sh@1499 -- # local bdfs 00:26:30.851 01:00:23 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:26:30.851 01:00:23 -- common/autotest_common.sh@1500 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/gen_nvme.sh 00:26:30.851 01:00:23 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:26:30.851 01:00:23 -- common/autotest_common.sh@1501 -- # (( 3 == 0 )) 00:26:30.851 01:00:23 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:26:30.851 01:00:23 -- common/autotest_common.sh@1513 -- # echo 0000:c9:00.0 00:26:30.851 01:00:23 -- target/identify_passthru.sh@16 -- # bdf=0000:c9:00.0 00:26:30.851 
01:00:23 -- target/identify_passthru.sh@17 -- # '[' -z 0000:c9:00.0 ']' 00:26:30.851 01:00:23 -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:26:30.851 01:00:23 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:26:30.851 01:00:23 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:26:30.851 EAL: No free 2048 kB hugepages reported on node 1 00:26:36.153 01:00:28 -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ941200WG2P0BGN 00:26:36.153 01:00:28 -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' -i 0 00:26:36.153 01:00:28 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:26:36.153 01:00:28 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:26:36.153 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.444 01:00:33 -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:26:41.444 01:00:33 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:26:41.444 01:00:33 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:41.444 01:00:33 -- common/autotest_common.sh@10 -- # set +x 00:26:41.444 01:00:33 -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:26:41.444 01:00:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:41.444 01:00:33 -- common/autotest_common.sh@10 -- # set +x 00:26:41.444 01:00:33 -- target/identify_passthru.sh@31 -- # nvmfpid=2916876 00:26:41.444 01:00:33 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.444 01:00:33 -- target/identify_passthru.sh@35 -- # waitforlisten 2916876 00:26:41.444 01:00:33 -- common/autotest_common.sh@817 -- # '[' -z 2916876 ']' 00:26:41.444 01:00:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.444 01:00:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:41.444 01:00:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.444 01:00:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:41.444 01:00:33 -- common/autotest_common.sh@10 -- # set +x 00:26:41.444 01:00:33 -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:26:41.444 [2024-04-27 01:00:33.933309] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:26:41.444 [2024-04-27 01:00:33.933421] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.444 EAL: No free 2048 kB hugepages reported on node 1 00:26:41.444 [2024-04-27 01:00:34.055029] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.704 [2024-04-27 01:00:34.155070] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:41.704 [2024-04-27 01:00:34.155110] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
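The EAL banner above comes from nvmf_tgt being launched inside the test namespace with --wait-for-rpc, so the app parks after DPDK initialization until the harness connects over /var/tmp/spdk.sock and releases it. Reduced to a sketch (binary path and namespace name as used in this job; the harness's waitforlisten is a polling loop, represented here by a single RPC):

    # Start the target in the namespace, then finish initialization over RPC.
    sudo ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    # Once the RPC socket appears (the harness polls for it):
    ./scripts/rpc.py framework_start_init

Note that nvmf_set_config --passthru-identify-ctrlr, issued next in the trace, has to land in this pre-init window, before framework_start_init runs.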
00:26:41.704 [2024-04-27 01:00:34.155121] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.704 [2024-04-27 01:00:34.155132] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.704 [2024-04-27 01:00:34.155139] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.704 [2024-04-27 01:00:34.155266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.704 [2024-04-27 01:00:34.155271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.704 [2024-04-27 01:00:34.155407] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.704 [2024-04-27 01:00:34.155416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.961 01:00:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:41.961 01:00:34 -- common/autotest_common.sh@850 -- # return 0 00:26:41.961 01:00:34 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:26:41.961 01:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.961 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:26:41.961 INFO: Log level set to 20 00:26:41.961 INFO: Requests: 00:26:41.961 { 00:26:41.961 "jsonrpc": "2.0", 00:26:41.961 "method": "nvmf_set_config", 00:26:41.961 "id": 1, 00:26:41.961 "params": { 00:26:41.961 "admin_cmd_passthru": { 00:26:41.961 "identify_ctrlr": true 00:26:41.961 } 00:26:41.961 } 00:26:41.961 } 00:26:41.961 00:26:41.961 INFO: response: 00:26:41.961 { 00:26:41.961 "jsonrpc": "2.0", 00:26:41.961 "id": 1, 00:26:41.961 "result": true 00:26:41.961 } 00:26:41.961 00:26:41.961 01:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:41.961 01:00:34 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:26:41.961 01:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:41.961 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:26:41.961 INFO: Setting log level to 20 00:26:41.961 INFO: Setting log level to 20 00:26:41.961 INFO: Log level set to 20 00:26:41.961 INFO: Log level set to 20 00:26:41.961 INFO: Requests: 00:26:41.961 { 00:26:41.961 "jsonrpc": "2.0", 00:26:41.961 "method": "framework_start_init", 00:26:41.961 "id": 1 00:26:41.961 } 00:26:41.961 00:26:41.961 INFO: Requests: 00:26:41.961 { 00:26:41.962 "jsonrpc": "2.0", 00:26:41.962 "method": "framework_start_init", 00:26:41.962 "id": 1 00:26:41.962 } 00:26:41.962 00:26:42.219 [2024-04-27 01:00:34.807906] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:26:42.219 INFO: response: 00:26:42.219 { 00:26:42.219 "jsonrpc": "2.0", 00:26:42.219 "id": 1, 00:26:42.219 "result": true 00:26:42.219 } 00:26:42.219 00:26:42.219 INFO: response: 00:26:42.219 { 00:26:42.219 "jsonrpc": "2.0", 00:26:42.219 "id": 1, 00:26:42.219 "result": true 00:26:42.219 } 00:26:42.219 00:26:42.219 01:00:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.219 01:00:34 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:42.219 01:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.219 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:26:42.219 INFO: Setting log level to 40 00:26:42.219 INFO: Setting log level to 40 00:26:42.220 INFO: Setting log level to 40 00:26:42.220 [2024-04-27 01:00:34.822321] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:42.220 01:00:34 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:42.220 01:00:34 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:26:42.220 01:00:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:42.220 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:26:42.220 01:00:34 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:c9:00.0 00:26:42.220 01:00:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:42.220 01:00:34 -- common/autotest_common.sh@10 -- # set +x 00:26:45.516 Nvme0n1 00:26:45.516 01:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.516 01:00:37 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:26:45.516 01:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.516 01:00:37 -- common/autotest_common.sh@10 -- # set +x 00:26:45.516 01:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.516 01:00:37 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:45.516 01:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.516 01:00:37 -- common/autotest_common.sh@10 -- # set +x 00:26:45.516 01:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.516 01:00:37 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:45.516 01:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.516 01:00:37 -- common/autotest_common.sh@10 -- # set +x 00:26:45.516 [2024-04-27 01:00:37.731671] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:45.516 01:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.516 01:00:37 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:26:45.516 01:00:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.516 01:00:37 -- common/autotest_common.sh@10 -- # set +x 00:26:45.516 [2024-04-27 01:00:37.739381] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:45.516 [ 00:26:45.516 { 00:26:45.516 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:45.516 "subtype": "Discovery", 00:26:45.516 "listen_addresses": [], 00:26:45.516 "allow_any_host": true, 00:26:45.516 "hosts": [] 00:26:45.516 }, 00:26:45.516 { 00:26:45.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:45.516 "subtype": "NVMe", 00:26:45.516 "listen_addresses": [ 00:26:45.516 { 00:26:45.516 "transport": "TCP", 00:26:45.516 "trtype": "TCP", 00:26:45.516 "adrfam": "IPv4", 00:26:45.516 "traddr": "10.0.0.2", 00:26:45.516 "trsvcid": "4420" 00:26:45.516 } 00:26:45.516 ], 00:26:45.516 "allow_any_host": true, 00:26:45.516 "hosts": [], 00:26:45.516 "serial_number": "SPDK00000000000001", 00:26:45.516 "model_number": "SPDK bdev Controller", 00:26:45.516 "max_namespaces": 1, 00:26:45.516 "min_cntlid": 1, 00:26:45.516 "max_cntlid": 65519, 00:26:45.516 "namespaces": [ 00:26:45.516 { 00:26:45.516 "nsid": 1, 00:26:45.516 "bdev_name": "Nvme0n1", 00:26:45.516 "name": "Nvme0n1", 00:26:45.516 "nguid": "34E7EE546FB1466BACB597B1CD961D77", 00:26:45.516 "uuid": "34e7ee54-6fb1-466b-acb5-97b1cd961d77" 00:26:45.516 } 00:26:45.516 ] 00:26:45.516 } 00:26:45.516 ] 00:26:45.516 01:00:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.516 01:00:37 -- 
target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:26:45.516 01:00:37 -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:45.516 01:00:37 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:26:45.516 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.516 01:00:38 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ941200WG2P0BGN 00:26:45.516 01:00:38 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:26:45.516 01:00:38 -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:26:45.516 01:00:38 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:26:45.516 EAL: No free 2048 kB hugepages reported on node 1 00:26:45.778 01:00:38 -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:26:45.778 01:00:38 -- target/identify_passthru.sh@63 -- # '[' PHLJ941200WG2P0BGN '!=' PHLJ941200WG2P0BGN ']' 00:26:45.778 01:00:38 -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:26:45.778 01:00:38 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:45.778 01:00:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:45.778 01:00:38 -- common/autotest_common.sh@10 -- # set +x 00:26:45.778 01:00:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:45.778 01:00:38 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:26:45.778 01:00:38 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:26:45.778 01:00:38 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:45.778 01:00:38 -- nvmf/common.sh@117 -- # sync 00:26:45.778 01:00:38 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:45.778 01:00:38 -- nvmf/common.sh@120 -- # set +e 00:26:45.778 01:00:38 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:45.778 01:00:38 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:45.778 rmmod nvme_tcp 00:26:45.778 rmmod nvme_fabrics 00:26:45.778 rmmod nvme_keyring 00:26:45.778 01:00:38 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:45.778 01:00:38 -- nvmf/common.sh@124 -- # set -e 00:26:45.778 01:00:38 -- nvmf/common.sh@125 -- # return 0 00:26:45.778 01:00:38 -- nvmf/common.sh@478 -- # '[' -n 2916876 ']' 00:26:45.778 01:00:38 -- nvmf/common.sh@479 -- # killprocess 2916876 00:26:45.778 01:00:38 -- common/autotest_common.sh@936 -- # '[' -z 2916876 ']' 00:26:45.778 01:00:38 -- common/autotest_common.sh@940 -- # kill -0 2916876 00:26:45.778 01:00:38 -- common/autotest_common.sh@941 -- # uname 00:26:45.778 01:00:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:45.778 01:00:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2916876 00:26:45.778 01:00:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:45.778 01:00:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:45.778 01:00:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2916876' 00:26:45.778 killing process with pid 2916876 00:26:45.778 01:00:38 -- common/autotest_common.sh@955 -- # kill 2916876 00:26:45.778 [2024-04-27 01:00:38.426314] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 
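The serial and model checks above boil down to comparing identify data fetched directly from the PCIe controller with the same data fetched through the NVMe/TCP subsystem that has admin-command passthru enabled. Using the binaries and addresses from this run, the comparison is essentially the following sketch:

    # Identify the drive locally and through the passthru subsystem; the test
    # fails (the '!=' branches above) only if the strings diverge.
    local_sn=$(./build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:c9:00.0' \
        | awk '/Serial Number:/ {print $3}')
    tcp_sn=$(./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
        | awk '/Serial Number:/ {print $3}')
    [ "$local_sn" = "$tcp_sn" ] || echo "passthru identify mismatch: $local_sn vs $tcp_sn"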
00:26:45.778 01:00:38 -- common/autotest_common.sh@960 -- # wait 2916876 00:26:49.065 01:00:41 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:49.065 01:00:41 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:49.065 01:00:41 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:49.065 01:00:41 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:49.065 01:00:41 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:49.065 01:00:41 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.065 01:00:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:49.065 01:00:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.999 01:00:43 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:50.999 00:26:50.999 real 0m25.127s 00:26:50.999 user 0m36.744s 00:26:50.999 sys 0m4.888s 00:26:50.999 01:00:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:50.999 01:00:43 -- common/autotest_common.sh@10 -- # set +x 00:26:50.999 ************************************ 00:26:50.999 END TEST nvmf_identify_passthru 00:26:50.999 ************************************ 00:26:50.999 01:00:43 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:50.999 01:00:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:26:50.999 01:00:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:50.999 01:00:43 -- common/autotest_common.sh@10 -- # set +x 00:26:50.999 ************************************ 00:26:50.999 START TEST nvmf_dif 00:26:50.999 ************************************ 00:26:50.999 01:00:43 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/dif.sh 00:26:50.999 * Looking for test storage... 
00:26:50.999 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:26:50.999 01:00:43 -- target/dif.sh@13 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.999 01:00:43 -- nvmf/common.sh@7 -- # uname -s 00:26:50.999 01:00:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.999 01:00:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.999 01:00:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.999 01:00:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.999 01:00:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.999 01:00:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.999 01:00:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.999 01:00:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.999 01:00:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.999 01:00:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.999 01:00:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:26:50.999 01:00:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:26:50.999 01:00:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.999 01:00:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.999 01:00:43 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:26:50.999 01:00:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.999 01:00:43 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:26:50.999 01:00:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.999 01:00:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.999 01:00:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.999 01:00:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.000 01:00:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.000 01:00:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.000 01:00:43 -- paths/export.sh@5 -- # export PATH 00:26:51.000 01:00:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:51.000 01:00:43 -- nvmf/common.sh@47 -- # : 0 00:26:51.000 01:00:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:51.000 01:00:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:51.000 01:00:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:51.000 01:00:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:51.000 01:00:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:51.000 01:00:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:51.000 01:00:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:51.000 01:00:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:51.000 01:00:43 -- target/dif.sh@15 -- # NULL_META=16 00:26:51.000 01:00:43 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:26:51.000 01:00:43 -- target/dif.sh@15 -- # NULL_SIZE=64 00:26:51.000 01:00:43 -- target/dif.sh@15 -- # NULL_DIF=1 00:26:51.000 01:00:43 -- target/dif.sh@135 -- # nvmftestinit 00:26:51.000 01:00:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:51.000 01:00:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:51.000 01:00:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:51.000 01:00:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:51.000 01:00:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:51.000 01:00:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:51.000 01:00:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:51.000 01:00:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:51.000 01:00:43 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:26:51.000 01:00:43 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:26:51.000 01:00:43 -- nvmf/common.sh@285 -- # xtrace_disable 00:26:51.000 01:00:43 -- common/autotest_common.sh@10 -- # set +x 00:26:56.286 01:00:48 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:26:56.286 01:00:48 -- nvmf/common.sh@291 -- # pci_devs=() 00:26:56.286 01:00:48 -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:56.286 01:00:48 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:56.286 01:00:48 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:56.286 01:00:48 -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:56.286 01:00:48 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:56.286 01:00:48 -- nvmf/common.sh@295 -- # net_devs=() 00:26:56.286 01:00:48 -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:56.286 01:00:48 -- nvmf/common.sh@296 -- # e810=() 00:26:56.286 01:00:48 -- nvmf/common.sh@296 -- # local -ga e810 00:26:56.286 01:00:48 -- nvmf/common.sh@297 -- # x722=() 00:26:56.286 01:00:48 -- nvmf/common.sh@297 -- # local -ga x722 00:26:56.286 01:00:48 -- nvmf/common.sh@298 -- # mlx=() 00:26:56.286 01:00:48 -- nvmf/common.sh@298 -- # local -ga mlx 00:26:56.286 01:00:48 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:56.286 01:00:48 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:56.286 01:00:48 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:56.286 01:00:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.286 01:00:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:26:56.286 Found 0000:27:00.0 (0x8086 - 0x159b) 00:26:56.286 01:00:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:56.286 01:00:48 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:26:56.286 Found 0000:27:00.1 (0x8086 - 0x159b) 00:26:56.286 01:00:48 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:56.286 01:00:48 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:26:56.286 01:00:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.286 01:00:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.286 01:00:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:56.286 01:00:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.287 01:00:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:26:56.287 Found net devices under 0000:27:00.0: cvl_0_0 00:26:56.287 01:00:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.287 01:00:48 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:56.287 01:00:48 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:56.287 01:00:48 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:26:56.287 01:00:48 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:56.287 01:00:48 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:26:56.287 Found net devices under 0000:27:00.1: cvl_0_1 00:26:56.287 01:00:48 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:26:56.287 01:00:48 -- 
nvmf/common.sh@393 -- # (( 2 == 0 )) 00:26:56.287 01:00:48 -- nvmf/common.sh@403 -- # is_hw=yes 00:26:56.287 01:00:48 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:26:56.287 01:00:48 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:26:56.287 01:00:48 -- nvmf/common.sh@407 -- # nvmf_tcp_init 00:26:56.287 01:00:48 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:56.287 01:00:48 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:56.287 01:00:48 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:56.287 01:00:48 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:56.287 01:00:48 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:56.287 01:00:48 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:56.287 01:00:48 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:56.287 01:00:48 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:56.287 01:00:48 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:56.287 01:00:48 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:56.287 01:00:48 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:56.287 01:00:48 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:56.287 01:00:48 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:56.545 01:00:49 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:56.545 01:00:49 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:56.545 01:00:49 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:56.545 01:00:49 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:56.545 01:00:49 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:56.545 01:00:49 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:56.545 01:00:49 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:56.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:56.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:26:56.545 00:26:56.545 --- 10.0.0.2 ping statistics --- 00:26:56.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.545 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:26:56.545 01:00:49 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:56.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:56.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms 00:26:56.545 00:26:56.545 --- 10.0.0.1 ping statistics --- 00:26:56.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:56.545 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms 00:26:56.545 01:00:49 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:56.545 01:00:49 -- nvmf/common.sh@411 -- # return 0 00:26:56.545 01:00:49 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:26:56.545 01:00:49 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:26:59.077 0000:74:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:59.077 0000:c9:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:59.077 0000:f1:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:59.077 0000:79:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:59.077 0000:cb:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:59.077 0000:6f:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:59.077 0000:6f:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:59.077 0000:f6:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:59.077 0000:f6:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:59.077 0000:74:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:59.077 0000:6a:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:59.077 0000:79:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:59.077 0000:ec:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:59.077 0000:6a:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:59.077 0000:ec:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:59.077 0000:ca:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:59.077 0000:e7:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:59.077 0000:e7:02.0 (8086 0cfe): Already using the vfio-pci driver 00:26:59.077 0000:f1:01.0 (8086 0b25): Already using the vfio-pci driver 00:26:59.077 01:00:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:59.077 01:00:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:59.077 01:00:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:59.077 01:00:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:59.077 01:00:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:59.077 01:00:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:59.077 01:00:51 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:26:59.077 01:00:51 -- target/dif.sh@137 -- # nvmfappstart 00:26:59.077 01:00:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:59.077 01:00:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:59.077 01:00:51 -- common/autotest_common.sh@10 -- # set +x 00:26:59.077 01:00:51 -- nvmf/common.sh@470 -- # nvmfpid=2923525 00:26:59.077 01:00:51 -- nvmf/common.sh@471 -- # waitforlisten 2923525 00:26:59.077 01:00:51 -- common/autotest_common.sh@817 -- # '[' -z 2923525 ']' 00:26:59.077 01:00:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:59.077 01:00:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:59.077 01:00:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:59.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
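This dif run repeats the same nvmftestinit plumbing seen earlier in the trace: one port of the two-port NIC is moved into a private network namespace to act as the NVMe-oF target, its sibling stays in the root namespace as the initiator, and TCP port 4420 is opened between them. Collected from the xtrace above:

    # Namespace topology used by these tests (interface names as discovered above).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT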
00:26:59.077 01:00:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:59.077 01:00:51 -- common/autotest_common.sh@10 -- # set +x 00:26:59.077 01:00:51 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:26:59.337 [2024-04-27 01:00:51.830244] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:26:59.337 [2024-04-27 01:00:51.830338] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:59.337 EAL: No free 2048 kB hugepages reported on node 1 00:26:59.337 [2024-04-27 01:00:51.950789] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:59.597 [2024-04-27 01:00:52.048601] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:59.597 [2024-04-27 01:00:52.048640] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:59.597 [2024-04-27 01:00:52.048650] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:59.597 [2024-04-27 01:00:52.048661] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:59.597 [2024-04-27 01:00:52.048670] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:59.597 [2024-04-27 01:00:52.048709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:59.855 01:00:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:59.856 01:00:52 -- common/autotest_common.sh@850 -- # return 0 00:26:59.856 01:00:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:59.856 01:00:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:59.856 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.115 01:00:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.115 01:00:52 -- target/dif.sh@139 -- # create_transport 00:27:00.115 01:00:52 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:27:00.115 01:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.115 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.115 [2024-04-27 01:00:52.563881] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.115 01:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.115 01:00:52 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:27:00.115 01:00:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:00.115 01:00:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:00.115 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.115 ************************************ 00:27:00.115 START TEST fio_dif_1_default 00:27:00.115 ************************************ 00:27:00.115 01:00:52 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:27:00.115 01:00:52 -- target/dif.sh@86 -- # create_subsystems 0 00:27:00.115 01:00:52 -- target/dif.sh@28 -- # local sub 00:27:00.115 01:00:52 -- target/dif.sh@30 -- # for sub in "$@" 00:27:00.115 01:00:52 -- target/dif.sh@31 -- # create_subsystem 0 00:27:00.115 01:00:52 -- target/dif.sh@18 -- # local sub_id=0 00:27:00.115 01:00:52 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 
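What distinguishes the dif target configured here is the transport's --dif-insert-or-strip option combined with null bdevs carrying 16 bytes of metadata with DIF type 1, so the target inserts protection information on writes and strips it on reads. The rpc_cmd sequence in the trace is equivalent to this rpc.py skeleton (arguments copied from the echoed commands; rpc.py assumed to be on the default socket):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420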
00:27:00.115 01:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.115 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.115 bdev_null0 00:27:00.115 01:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.115 01:00:52 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:00.115 01:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.115 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.115 01:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.115 01:00:52 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:00.115 01:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.115 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.115 01:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.115 01:00:52 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:00.115 01:00:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:00.115 01:00:52 -- common/autotest_common.sh@10 -- # set +x 00:27:00.115 [2024-04-27 01:00:52.692070] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.115 01:00:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:00.115 01:00:52 -- target/dif.sh@87 -- # fio /dev/fd/62 00:27:00.115 01:00:52 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:00.115 01:00:52 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:00.115 01:00:52 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:00.115 01:00:52 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:00.115 01:00:52 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:00.115 01:00:52 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:00.115 01:00:52 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:27:00.115 01:00:52 -- common/autotest_common.sh@1327 -- # shift 00:27:00.115 01:00:52 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:00.115 01:00:52 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:00.115 01:00:52 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:00.115 01:00:52 -- nvmf/common.sh@521 -- # config=() 00:27:00.115 01:00:52 -- nvmf/common.sh@521 -- # local subsystem config 00:27:00.115 01:00:52 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:00.115 01:00:52 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:00.115 { 00:27:00.115 "params": { 00:27:00.115 "name": "Nvme$subsystem", 00:27:00.115 "trtype": "$TEST_TRANSPORT", 00:27:00.115 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:00.115 "adrfam": "ipv4", 00:27:00.115 "trsvcid": "$NVMF_PORT", 00:27:00.115 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:00.115 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:00.115 "hdgst": ${hdgst:-false}, 00:27:00.115 "ddgst": ${ddgst:-false} 00:27:00.115 }, 00:27:00.115 "method": "bdev_nvme_attach_controller" 00:27:00.115 } 00:27:00.115 EOF 00:27:00.115 )") 00:27:00.115 01:00:52 -- target/dif.sh@82 -- # gen_fio_conf 00:27:00.115 01:00:52 -- target/dif.sh@54 -- # local file 00:27:00.115 
01:00:52 -- target/dif.sh@56 -- # cat 00:27:00.115 01:00:52 -- nvmf/common.sh@543 -- # cat 00:27:00.115 01:00:52 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:00.115 01:00:52 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:00.115 01:00:52 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:00.115 01:00:52 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:00.115 01:00:52 -- target/dif.sh@72 -- # (( file <= files )) 00:27:00.115 01:00:52 -- nvmf/common.sh@545 -- # jq . 00:27:00.115 01:00:52 -- nvmf/common.sh@546 -- # IFS=, 00:27:00.115 01:00:52 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:00.115 "params": { 00:27:00.115 "name": "Nvme0", 00:27:00.115 "trtype": "tcp", 00:27:00.115 "traddr": "10.0.0.2", 00:27:00.115 "adrfam": "ipv4", 00:27:00.115 "trsvcid": "4420", 00:27:00.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:00.116 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:00.116 "hdgst": false, 00:27:00.116 "ddgst": false 00:27:00.116 }, 00:27:00.116 "method": "bdev_nvme_attach_controller" 00:27:00.116 }' 00:27:00.116 01:00:52 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:00.116 01:00:52 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:00.116 01:00:52 -- common/autotest_common.sh@1333 -- # break 00:27:00.116 01:00:52 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:00.116 01:00:52 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:00.696 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:00.696 fio-3.35 00:27:00.696 Starting 1 thread 00:27:00.696 EAL: No free 2048 kB hugepages reported on node 1 00:27:12.874 00:27:12.874 filename0: (groupid=0, jobs=1): err= 0: pid=2924232: Sat Apr 27 01:01:03 2024 00:27:12.874 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:27:12.874 slat (nsec): min=5791, max=25031, avg=7418.79, stdev=2209.26 00:27:12.874 clat (usec): min=40792, max=43894, avg=40991.13, stdev=192.93 00:27:12.874 lat (usec): min=40798, max=43919, avg=40998.54, stdev=193.15 00:27:12.874 clat percentiles (usec): 00:27:12.874 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:27:12.874 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:12.874 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:12.874 | 99.00th=[41157], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:27:12.874 | 99.99th=[43779] 00:27:12.874 bw ( KiB/s): min= 384, max= 416, per=99.46%, avg=388.80, stdev=11.72, samples=20 00:27:12.874 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:12.874 lat (msec) : 50=100.00% 00:27:12.874 cpu : usr=95.82%, sys=3.89%, ctx=14, majf=0, minf=1634 00:27:12.874 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:12.874 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.874 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:12.874 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:12.874 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:12.875 00:27:12.875 Run status group 0 (all jobs): 00:27:12.875 READ: bw=390KiB/s (399kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=3904KiB (3998kB), 
run=10007-10007msec 00:27:12.875 ----------------------------------------------------- 00:27:12.875 Suppressions used: 00:27:12.875 count bytes template 00:27:12.875 1 8 /usr/src/fio/parse.c 00:27:12.875 1 8 libtcmalloc_minimal.so 00:27:12.875 1 904 libcrypto.so 00:27:12.875 ----------------------------------------------------- 00:27:12.875 00:27:12.875 01:01:04 -- target/dif.sh@88 -- # destroy_subsystems 0 00:27:12.875 01:01:04 -- target/dif.sh@43 -- # local sub 00:27:12.875 01:01:04 -- target/dif.sh@45 -- # for sub in "$@" 00:27:12.875 01:01:04 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:12.875 01:01:04 -- target/dif.sh@36 -- # local sub_id=0 00:27:12.875 01:01:04 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 00:27:12.875 real 0m11.808s 00:27:12.875 user 0m25.498s 00:27:12.875 sys 0m0.868s 00:27:12.875 01:01:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 ************************************ 00:27:12.875 END TEST fio_dif_1_default 00:27:12.875 ************************************ 00:27:12.875 01:01:04 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:27:12.875 01:01:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:12.875 01:01:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 ************************************ 00:27:12.875 START TEST fio_dif_1_multi_subsystems 00:27:12.875 ************************************ 00:27:12.875 01:01:04 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:27:12.875 01:01:04 -- target/dif.sh@92 -- # local files=1 00:27:12.875 01:01:04 -- target/dif.sh@94 -- # create_subsystems 0 1 00:27:12.875 01:01:04 -- target/dif.sh@28 -- # local sub 00:27:12.875 01:01:04 -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.875 01:01:04 -- target/dif.sh@31 -- # create_subsystem 0 00:27:12.875 01:01:04 -- target/dif.sh@18 -- # local sub_id=0 00:27:12.875 01:01:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 bdev_null0 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- 
common/autotest_common.sh@10 -- # set +x 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 [2024-04-27 01:01:04.629005] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@30 -- # for sub in "$@" 00:27:12.875 01:01:04 -- target/dif.sh@31 -- # create_subsystem 1 00:27:12.875 01:01:04 -- target/dif.sh@18 -- # local sub_id=1 00:27:12.875 01:01:04 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 bdev_null1 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:12.875 01:01:04 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:12.875 01:01:04 -- common/autotest_common.sh@10 -- # set +x 00:27:12.875 01:01:04 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:12.875 01:01:04 -- target/dif.sh@95 -- # fio /dev/fd/62 00:27:12.875 01:01:04 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.875 01:01:04 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.875 01:01:04 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:12.875 01:01:04 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:12.875 01:01:04 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:12.875 01:01:04 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:12.875 01:01:04 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:27:12.875 01:01:04 -- common/autotest_common.sh@1327 -- # shift 00:27:12.875 01:01:04 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:12.875 01:01:04 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:12.875 01:01:04 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:12.875 01:01:04 -- nvmf/common.sh@521 -- # config=() 00:27:12.875 01:01:04 -- target/dif.sh@82 -- # gen_fio_conf 00:27:12.875 01:01:04 -- nvmf/common.sh@521 -- # local subsystem config 
00:27:12.875 01:01:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:12.875 01:01:04 -- target/dif.sh@54 -- # local file 00:27:12.875 01:01:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:12.875 { 00:27:12.875 "params": { 00:27:12.875 "name": "Nvme$subsystem", 00:27:12.875 "trtype": "$TEST_TRANSPORT", 00:27:12.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.875 "adrfam": "ipv4", 00:27:12.875 "trsvcid": "$NVMF_PORT", 00:27:12.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.875 "hdgst": ${hdgst:-false}, 00:27:12.875 "ddgst": ${ddgst:-false} 00:27:12.875 }, 00:27:12.875 "method": "bdev_nvme_attach_controller" 00:27:12.875 } 00:27:12.875 EOF 00:27:12.875 )") 00:27:12.875 01:01:04 -- target/dif.sh@56 -- # cat 00:27:12.875 01:01:04 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:12.875 01:01:04 -- nvmf/common.sh@543 -- # cat 00:27:12.875 01:01:04 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:12.875 01:01:04 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:12.875 01:01:04 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:12.875 01:01:04 -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.875 01:01:04 -- target/dif.sh@73 -- # cat 00:27:12.875 01:01:04 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:12.875 01:01:04 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:12.875 { 00:27:12.875 "params": { 00:27:12.875 "name": "Nvme$subsystem", 00:27:12.875 "trtype": "$TEST_TRANSPORT", 00:27:12.875 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:12.875 "adrfam": "ipv4", 00:27:12.875 "trsvcid": "$NVMF_PORT", 00:27:12.875 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:12.875 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:12.875 "hdgst": ${hdgst:-false}, 00:27:12.875 "ddgst": ${ddgst:-false} 00:27:12.875 }, 00:27:12.875 "method": "bdev_nvme_attach_controller" 00:27:12.875 } 00:27:12.875 EOF 00:27:12.875 )") 00:27:12.875 01:01:04 -- target/dif.sh@72 -- # (( file++ )) 00:27:12.875 01:01:04 -- target/dif.sh@72 -- # (( file <= files )) 00:27:12.875 01:01:04 -- nvmf/common.sh@543 -- # cat 00:27:12.875 01:01:04 -- nvmf/common.sh@545 -- # jq . 
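[editor's note] The interleaved "config+=", "cat", "IFS=,", "printf" and "jq ." lines above are nvmf/common.sh's gen_nvmf_target_json building one bdev_nvme_attach_controller stanza per subsystem; the pretty-printed result appears just below. Stripped of xtrace noise, the idiom is roughly the sketch that follows. It is simplified: how the stanzas get spliced into the final JSON document (and any extra bdev_nvme options) depends on the common.sh revision, so the wrapper shape at the end is an assumption.

# sketch of the config-assembly idiom traced above
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT come from the test
# environment (tcp / 10.0.0.2 / 4420 in this run)
config=()
for subsystem in "${@:-1}"; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# join the stanzas with commas, splice them into a config document, normalize with jq
# (this outer "subsystems"/"bdev" wrapper is my assumption about the final shape)
jq . <<JSON
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [ $(IFS=','; printf '%s\n' "${config[*]}") ]
    }
  ]
}
JSON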
00:27:12.875 01:01:04 -- nvmf/common.sh@546 -- # IFS=, 00:27:12.875 01:01:04 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:12.875 "params": { 00:27:12.875 "name": "Nvme0", 00:27:12.875 "trtype": "tcp", 00:27:12.875 "traddr": "10.0.0.2", 00:27:12.875 "adrfam": "ipv4", 00:27:12.875 "trsvcid": "4420", 00:27:12.875 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:12.875 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:12.875 "hdgst": false, 00:27:12.875 "ddgst": false 00:27:12.875 }, 00:27:12.875 "method": "bdev_nvme_attach_controller" 00:27:12.875 },{ 00:27:12.875 "params": { 00:27:12.875 "name": "Nvme1", 00:27:12.875 "trtype": "tcp", 00:27:12.875 "traddr": "10.0.0.2", 00:27:12.875 "adrfam": "ipv4", 00:27:12.875 "trsvcid": "4420", 00:27:12.875 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:12.875 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:12.875 "hdgst": false, 00:27:12.875 "ddgst": false 00:27:12.875 }, 00:27:12.875 "method": "bdev_nvme_attach_controller" 00:27:12.875 }' 00:27:12.875 01:01:04 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:12.875 01:01:04 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:12.875 01:01:04 -- common/autotest_common.sh@1333 -- # break 00:27:12.876 01:01:04 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:12.876 01:01:04 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:12.876 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:12.876 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:27:12.876 fio-3.35 00:27:12.876 Starting 2 threads 00:27:12.876 EAL: No free 2048 kB hugepages reported on node 1 00:27:25.056 00:27:25.056 filename0: (groupid=0, jobs=1): err= 0: pid=2926666: Sat Apr 27 01:01:16 2024 00:27:25.056 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10004msec) 00:27:25.056 slat (usec): min=5, max=101, avg= 7.15, stdev= 2.96 00:27:25.056 clat (usec): min=450, max=42611, avg=21085.43, stdev=20450.69 00:27:25.056 lat (usec): min=457, max=42617, avg=21092.57, stdev=20450.35 00:27:25.056 clat percentiles (usec): 00:27:25.056 | 1.00th=[ 498], 5.00th=[ 523], 10.00th=[ 529], 20.00th=[ 537], 00:27:25.056 | 30.00th=[ 545], 40.00th=[ 578], 50.00th=[41157], 60.00th=[41157], 00:27:25.056 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:27:25.056 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:27:25.056 | 99.99th=[42730] 00:27:25.056 bw ( KiB/s): min= 672, max= 768, per=66.15%, avg=759.58, stdev=25.78, samples=19 00:27:25.056 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:27:25.056 lat (usec) : 500=1.11%, 750=48.00%, 1000=0.69% 00:27:25.056 lat (msec) : 50=50.21% 00:27:25.056 cpu : usr=98.17%, sys=1.54%, ctx=18, majf=0, minf=1634 00:27:25.056 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:25.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.056 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.056 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:25.056 filename1: (groupid=0, jobs=1): err= 0: pid=2926668: Sat Apr 27 01:01:16 2024 00:27:25.056 read: IOPS=97, 
BW=390KiB/s (399kB/s)(3904KiB/10012msec) 00:27:25.056 slat (nsec): min=5969, max=29422, avg=7494.42, stdev=2095.68 00:27:25.056 clat (usec): min=40711, max=41991, avg=41009.80, stdev=182.30 00:27:25.056 lat (usec): min=40717, max=42000, avg=41017.30, stdev=182.43 00:27:25.056 clat percentiles (usec): 00:27:25.056 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:27:25.056 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:27:25.056 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:27:25.056 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:27:25.056 | 99.99th=[42206] 00:27:25.056 bw ( KiB/s): min= 384, max= 416, per=33.81%, avg=388.80, stdev=11.72, samples=20 00:27:25.056 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:27:25.056 lat (msec) : 50=100.00% 00:27:25.056 cpu : usr=98.04%, sys=1.68%, ctx=16, majf=0, minf=1634 00:27:25.056 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:25.056 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.056 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:25.056 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:25.056 latency : target=0, window=0, percentile=100.00%, depth=4 00:27:25.056 00:27:25.056 Run status group 0 (all jobs): 00:27:25.056 READ: bw=1147KiB/s (1175kB/s), 390KiB/s-758KiB/s (399kB/s-776kB/s), io=11.2MiB (11.8MB), run=10004-10012msec 00:27:25.056 ----------------------------------------------------- 00:27:25.056 Suppressions used: 00:27:25.056 count bytes template 00:27:25.056 2 16 /usr/src/fio/parse.c 00:27:25.056 1 8 libtcmalloc_minimal.so 00:27:25.056 1 904 libcrypto.so 00:27:25.056 ----------------------------------------------------- 00:27:25.056 00:27:25.056 01:01:16 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:27:25.056 01:01:16 -- target/dif.sh@43 -- # local sub 00:27:25.056 01:01:16 -- target/dif.sh@45 -- # for sub in "$@" 00:27:25.056 01:01:16 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:25.056 01:01:16 -- target/dif.sh@36 -- # local sub_id=0 00:27:25.056 01:01:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:25.056 01:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.056 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:27:25.056 01:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.056 01:01:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:25.056 01:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.056 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:27:25.056 01:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.056 01:01:16 -- target/dif.sh@45 -- # for sub in "$@" 00:27:25.056 01:01:16 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:25.056 01:01:16 -- target/dif.sh@36 -- # local sub_id=1 00:27:25.056 01:01:16 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:25.056 01:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.056 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:27:25.056 01:01:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.056 01:01:16 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:25.056 01:01:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.056 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:27:25.057 01:01:16 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.057 00:27:25.057 real 0m12.285s 00:27:25.057 user 0m33.409s 00:27:25.057 sys 0m0.813s 00:27:25.057 01:01:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:25.057 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:27:25.057 ************************************ 00:27:25.057 END TEST fio_dif_1_multi_subsystems 00:27:25.057 ************************************ 00:27:25.057 01:01:16 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:27:25.057 01:01:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:25.057 01:01:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:25.057 01:01:16 -- common/autotest_common.sh@10 -- # set +x 00:27:25.057 ************************************ 00:27:25.057 START TEST fio_dif_rand_params 00:27:25.057 ************************************ 00:27:25.057 01:01:17 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:27:25.057 01:01:17 -- target/dif.sh@100 -- # local NULL_DIF 00:27:25.057 01:01:17 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:27:25.057 01:01:17 -- target/dif.sh@103 -- # NULL_DIF=3 00:27:25.057 01:01:17 -- target/dif.sh@103 -- # bs=128k 00:27:25.057 01:01:17 -- target/dif.sh@103 -- # numjobs=3 00:27:25.057 01:01:17 -- target/dif.sh@103 -- # iodepth=3 00:27:25.057 01:01:17 -- target/dif.sh@103 -- # runtime=5 00:27:25.057 01:01:17 -- target/dif.sh@105 -- # create_subsystems 0 00:27:25.057 01:01:17 -- target/dif.sh@28 -- # local sub 00:27:25.057 01:01:17 -- target/dif.sh@30 -- # for sub in "$@" 00:27:25.057 01:01:17 -- target/dif.sh@31 -- # create_subsystem 0 00:27:25.057 01:01:17 -- target/dif.sh@18 -- # local sub_id=0 00:27:25.057 01:01:17 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:25.057 01:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.057 01:01:17 -- common/autotest_common.sh@10 -- # set +x 00:27:25.057 bdev_null0 00:27:25.057 01:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.057 01:01:17 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:25.057 01:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.057 01:01:17 -- common/autotest_common.sh@10 -- # set +x 00:27:25.057 01:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.057 01:01:17 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:25.057 01:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.057 01:01:17 -- common/autotest_common.sh@10 -- # set +x 00:27:25.057 01:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.057 01:01:17 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:25.057 01:01:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.057 01:01:17 -- common/autotest_common.sh@10 -- # set +x 00:27:25.057 [2024-04-27 01:01:17.041555] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.057 01:01:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.057 01:01:17 -- target/dif.sh@106 -- # fio /dev/fd/62 00:27:25.057 01:01:17 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.057 01:01:17 -- common/autotest_common.sh@1342 -- # fio_plugin 
/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.057 01:01:17 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:25.057 01:01:17 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:27:25.057 01:01:17 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:25.057 01:01:17 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:25.057 01:01:17 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:25.057 01:01:17 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:25.057 01:01:17 -- common/autotest_common.sh@1327 -- # shift 00:27:25.057 01:01:17 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:25.057 01:01:17 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:25.057 01:01:17 -- nvmf/common.sh@521 -- # config=() 00:27:25.057 01:01:17 -- target/dif.sh@82 -- # gen_fio_conf 00:27:25.057 01:01:17 -- nvmf/common.sh@521 -- # local subsystem config 00:27:25.057 01:01:17 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:25.057 01:01:17 -- target/dif.sh@54 -- # local file 00:27:25.057 01:01:17 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:25.057 { 00:27:25.057 "params": { 00:27:25.057 "name": "Nvme$subsystem", 00:27:25.057 "trtype": "$TEST_TRANSPORT", 00:27:25.057 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:25.057 "adrfam": "ipv4", 00:27:25.057 "trsvcid": "$NVMF_PORT", 00:27:25.057 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:25.057 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:25.057 "hdgst": ${hdgst:-false}, 00:27:25.057 "ddgst": ${ddgst:-false} 00:27:25.057 }, 00:27:25.057 "method": "bdev_nvme_attach_controller" 00:27:25.057 } 00:27:25.057 EOF 00:27:25.057 )") 00:27:25.057 01:01:17 -- target/dif.sh@56 -- # cat 00:27:25.057 01:01:17 -- nvmf/common.sh@543 -- # cat 00:27:25.057 01:01:17 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:25.057 01:01:17 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:25.057 01:01:17 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:25.057 01:01:17 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:25.057 01:01:17 -- target/dif.sh@72 -- # (( file <= files )) 00:27:25.057 01:01:17 -- nvmf/common.sh@545 -- # jq . 
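[editor's note] The ldd/grep/awk probe just traced feeds the fio launch that follows: the harness resolves the ASAN runtime the spdk_bdev plugin links against and preloads it ahead of the plugin, since fio itself is not ASAN-instrumented. Condensed, roughly (paths copied from the log; reading the two /dev/fd arguments as the JSON config and the fio job file, fed in by the caller via process substitution, is my interpretation of the wrapper):

# sketch: sanitizer-aware fio launch as traced
plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev
asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')   # -> /usr/lib64/libasan.so.8 on this runner
LD_PRELOAD="$asan_lib $plugin" \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61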
00:27:25.057 01:01:17 -- nvmf/common.sh@546 -- # IFS=, 00:27:25.057 01:01:17 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:25.057 "params": { 00:27:25.057 "name": "Nvme0", 00:27:25.057 "trtype": "tcp", 00:27:25.057 "traddr": "10.0.0.2", 00:27:25.057 "adrfam": "ipv4", 00:27:25.057 "trsvcid": "4420", 00:27:25.057 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:25.057 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:25.057 "hdgst": false, 00:27:25.057 "ddgst": false 00:27:25.057 }, 00:27:25.057 "method": "bdev_nvme_attach_controller" 00:27:25.057 }' 00:27:25.057 01:01:17 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:25.057 01:01:17 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:25.057 01:01:17 -- common/autotest_common.sh@1333 -- # break 00:27:25.057 01:01:17 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:25.057 01:01:17 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:25.057 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:25.057 ... 00:27:25.057 fio-3.35 00:27:25.057 Starting 3 threads 00:27:25.057 EAL: No free 2048 kB hugepages reported on node 1 00:27:31.635 00:27:31.635 filename0: (groupid=0, jobs=1): err= 0: pid=2929340: Sat Apr 27 01:01:23 2024 00:27:31.635 read: IOPS=267, BW=33.4MiB/s (35.0MB/s)(169MiB/5045msec) 00:27:31.635 slat (nsec): min=6006, max=29875, avg=8411.50, stdev=2515.43 00:27:31.635 clat (usec): min=3111, max=88355, avg=11177.90, stdev=11408.39 00:27:31.635 lat (usec): min=3118, max=88363, avg=11186.31, stdev=11408.41 00:27:31.635 clat percentiles (usec): 00:27:31.635 | 1.00th=[ 3556], 5.00th=[ 4555], 10.00th=[ 5604], 20.00th=[ 6128], 00:27:31.635 | 30.00th=[ 6849], 40.00th=[ 7701], 50.00th=[ 8225], 60.00th=[ 8586], 00:27:31.635 | 70.00th=[ 8979], 80.00th=[ 9765], 90.00th=[11338], 95.00th=[47449], 00:27:31.635 | 99.00th=[49546], 99.50th=[50070], 99.90th=[52167], 99.95th=[88605], 00:27:31.635 | 99.99th=[88605] 00:27:31.635 bw ( KiB/s): min=25600, max=41472, per=31.01%, avg=34483.20, stdev=5074.36, samples=10 00:27:31.635 iops : min= 200, max= 324, avg=269.40, stdev=39.64, samples=10 00:27:31.635 lat (msec) : 4=3.26%, 10=78.21%, 20=10.01%, 50=8.01%, 100=0.52% 00:27:31.635 cpu : usr=96.49%, sys=3.21%, ctx=9, majf=0, minf=1637 00:27:31.635 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.635 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.635 issued rwts: total=1349,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.635 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:31.635 filename0: (groupid=0, jobs=1): err= 0: pid=2929341: Sat Apr 27 01:01:23 2024 00:27:31.635 read: IOPS=283, BW=35.4MiB/s (37.1MB/s)(179MiB/5045msec) 00:27:31.635 slat (nsec): min=4852, max=19577, avg=7270.12, stdev=1158.38 00:27:31.635 clat (usec): min=3087, max=86700, avg=10551.78, stdev=10464.48 00:27:31.635 lat (usec): min=3093, max=86708, avg=10559.05, stdev=10464.61 00:27:31.635 clat percentiles (usec): 00:27:31.635 | 1.00th=[ 3425], 5.00th=[ 3621], 10.00th=[ 4047], 20.00th=[ 5800], 00:27:31.635 | 30.00th=[ 6390], 40.00th=[ 7308], 50.00th=[ 8455], 60.00th=[ 9110], 00:27:31.635 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[12649], 
95.00th=[46924], 00:27:31.635 | 99.00th=[51119], 99.50th=[51643], 99.90th=[54789], 99.95th=[86508], 00:27:31.635 | 99.99th=[86508] 00:27:31.635 bw ( KiB/s): min=14848, max=50944, per=32.85%, avg=36531.20, stdev=10134.96, samples=10 00:27:31.635 iops : min= 116, max= 398, avg=285.40, stdev=79.18, samples=10 00:27:31.636 lat (msec) : 4=9.17%, 10=66.41%, 20=18.05%, 50=4.34%, 100=2.03% 00:27:31.636 cpu : usr=96.55%, sys=3.15%, ctx=6, majf=0, minf=1635 00:27:31.636 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.636 issued rwts: total=1429,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.636 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:31.636 filename0: (groupid=0, jobs=1): err= 0: pid=2929342: Sat Apr 27 01:01:23 2024 00:27:31.636 read: IOPS=320, BW=40.1MiB/s (42.0MB/s)(201MiB/5004msec) 00:27:31.636 slat (nsec): min=6012, max=29159, avg=8350.52, stdev=2389.55 00:27:31.636 clat (usec): min=2962, max=86396, avg=9342.17, stdev=8461.81 00:27:31.636 lat (usec): min=2978, max=86404, avg=9350.52, stdev=8461.47 00:27:31.636 clat percentiles (usec): 00:27:31.636 | 1.00th=[ 3392], 5.00th=[ 3621], 10.00th=[ 3916], 20.00th=[ 5604], 00:27:31.636 | 30.00th=[ 6194], 40.00th=[ 6718], 50.00th=[ 7832], 60.00th=[ 8848], 00:27:31.636 | 70.00th=[ 9503], 80.00th=[10290], 90.00th=[11731], 95.00th=[13698], 00:27:31.636 | 99.00th=[49546], 99.50th=[50594], 99.90th=[52167], 99.95th=[86508], 00:27:31.636 | 99.99th=[86508] 00:27:31.636 bw ( KiB/s): min=29440, max=64128, per=36.89%, avg=41024.00, stdev=10885.14, samples=10 00:27:31.636 iops : min= 230, max= 501, avg=320.50, stdev=85.04, samples=10 00:27:31.636 lat (msec) : 4=10.97%, 10=65.86%, 20=19.13%, 50=3.24%, 100=0.81% 00:27:31.636 cpu : usr=95.68%, sys=4.00%, ctx=6, majf=0, minf=1634 00:27:31.636 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:31.636 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.636 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:31.636 issued rwts: total=1605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:31.636 latency : target=0, window=0, percentile=100.00%, depth=3 00:27:31.636 00:27:31.636 Run status group 0 (all jobs): 00:27:31.636 READ: bw=109MiB/s (114MB/s), 33.4MiB/s-40.1MiB/s (35.0MB/s-42.0MB/s), io=548MiB (574MB), run=5004-5045msec 00:27:31.636 ----------------------------------------------------- 00:27:31.636 Suppressions used: 00:27:31.636 count bytes template 00:27:31.636 5 44 /usr/src/fio/parse.c 00:27:31.636 1 8 libtcmalloc_minimal.so 00:27:31.636 1 904 libcrypto.so 00:27:31.636 ----------------------------------------------------- 00:27:31.636 00:27:31.636 01:01:23 -- target/dif.sh@107 -- # destroy_subsystems 0 00:27:31.636 01:01:23 -- target/dif.sh@43 -- # local sub 00:27:31.636 01:01:23 -- target/dif.sh@45 -- # for sub in "$@" 00:27:31.636 01:01:23 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:31.636 01:01:23 -- target/dif.sh@36 -- # local sub_id=0 00:27:31.636 01:01:23 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:31.636 01:01:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:23 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@39 -- # rpc_cmd 
bdev_null_delete bdev_null0 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@109 -- # NULL_DIF=2 00:27:31.636 01:01:24 -- target/dif.sh@109 -- # bs=4k 00:27:31.636 01:01:24 -- target/dif.sh@109 -- # numjobs=8 00:27:31.636 01:01:24 -- target/dif.sh@109 -- # iodepth=16 00:27:31.636 01:01:24 -- target/dif.sh@109 -- # runtime= 00:27:31.636 01:01:24 -- target/dif.sh@109 -- # files=2 00:27:31.636 01:01:24 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:27:31.636 01:01:24 -- target/dif.sh@28 -- # local sub 00:27:31.636 01:01:24 -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.636 01:01:24 -- target/dif.sh@31 -- # create_subsystem 0 00:27:31.636 01:01:24 -- target/dif.sh@18 -- # local sub_id=0 00:27:31.636 01:01:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 bdev_null0 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 [2024-04-27 01:01:24.039174] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.636 01:01:24 -- target/dif.sh@31 -- # create_subsystem 1 00:27:31.636 01:01:24 -- target/dif.sh@18 -- # local sub_id=1 00:27:31.636 01:01:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 bdev_null1 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 
01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@30 -- # for sub in "$@" 00:27:31.636 01:01:24 -- target/dif.sh@31 -- # create_subsystem 2 00:27:31.636 01:01:24 -- target/dif.sh@18 -- # local sub_id=2 00:27:31.636 01:01:24 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 bdev_null2 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:31.636 01:01:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:31.636 01:01:24 -- common/autotest_common.sh@10 -- # set +x 00:27:31.636 01:01:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:31.636 01:01:24 -- target/dif.sh@112 -- # fio /dev/fd/62 00:27:31.636 01:01:24 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.636 01:01:24 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.636 01:01:24 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:31.636 01:01:24 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:31.636 01:01:24 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:31.636 01:01:24 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.636 01:01:24 -- common/autotest_common.sh@1327 -- # shift 00:27:31.636 01:01:24 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:27:31.636 01:01:24 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:31.636 01:01:24 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:31.636 01:01:24 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:27:31.636 01:01:24 -- nvmf/common.sh@521 -- # config=() 00:27:31.636 01:01:24 -- target/dif.sh@82 -- # gen_fio_conf 00:27:31.636 01:01:24 -- nvmf/common.sh@521 -- # local subsystem config 00:27:31.636 01:01:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:31.636 01:01:24 -- target/dif.sh@54 -- # 
local file 00:27:31.636 01:01:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:31.636 { 00:27:31.636 "params": { 00:27:31.636 "name": "Nvme$subsystem", 00:27:31.636 "trtype": "$TEST_TRANSPORT", 00:27:31.636 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.636 "adrfam": "ipv4", 00:27:31.636 "trsvcid": "$NVMF_PORT", 00:27:31.636 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.636 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.636 "hdgst": ${hdgst:-false}, 00:27:31.636 "ddgst": ${ddgst:-false} 00:27:31.636 }, 00:27:31.636 "method": "bdev_nvme_attach_controller" 00:27:31.636 } 00:27:31.636 EOF 00:27:31.636 )") 00:27:31.636 01:01:24 -- target/dif.sh@56 -- # cat 00:27:31.636 01:01:24 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:31.636 01:01:24 -- nvmf/common.sh@543 -- # cat 00:27:31.636 01:01:24 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:31.636 01:01:24 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:31.636 01:01:24 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:31.636 01:01:24 -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.636 01:01:24 -- target/dif.sh@73 -- # cat 00:27:31.636 01:01:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:31.637 01:01:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:31.637 { 00:27:31.637 "params": { 00:27:31.637 "name": "Nvme$subsystem", 00:27:31.637 "trtype": "$TEST_TRANSPORT", 00:27:31.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.637 "adrfam": "ipv4", 00:27:31.637 "trsvcid": "$NVMF_PORT", 00:27:31.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.637 "hdgst": ${hdgst:-false}, 00:27:31.637 "ddgst": ${ddgst:-false} 00:27:31.637 }, 00:27:31.637 "method": "bdev_nvme_attach_controller" 00:27:31.637 } 00:27:31.637 EOF 00:27:31.637 )") 00:27:31.637 01:01:24 -- target/dif.sh@72 -- # (( file++ )) 00:27:31.637 01:01:24 -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.637 01:01:24 -- target/dif.sh@73 -- # cat 00:27:31.637 01:01:24 -- nvmf/common.sh@543 -- # cat 00:27:31.637 01:01:24 -- target/dif.sh@72 -- # (( file++ )) 00:27:31.637 01:01:24 -- target/dif.sh@72 -- # (( file <= files )) 00:27:31.637 01:01:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:31.637 01:01:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:31.637 { 00:27:31.637 "params": { 00:27:31.637 "name": "Nvme$subsystem", 00:27:31.637 "trtype": "$TEST_TRANSPORT", 00:27:31.637 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:31.637 "adrfam": "ipv4", 00:27:31.637 "trsvcid": "$NVMF_PORT", 00:27:31.637 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:31.637 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:31.637 "hdgst": ${hdgst:-false}, 00:27:31.637 "ddgst": ${ddgst:-false} 00:27:31.637 }, 00:27:31.637 "method": "bdev_nvme_attach_controller" 00:27:31.637 } 00:27:31.637 EOF 00:27:31.637 )") 00:27:31.637 01:01:24 -- nvmf/common.sh@543 -- # cat 00:27:31.637 01:01:24 -- nvmf/common.sh@545 -- # jq . 
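[editor's note] Every TEST in this file brackets its fio run with the same create/teardown pair per sub-id, and this fio_dif_rand_params pass does it three times over (sub-ids 0..2, NULL_DIF=2). The RPC names below are verbatim from the traces; as before, driving them through scripts/rpc.py rather than the rpc_cmd wrapper is an assumption.

# sketch: per-subsystem create, as in dif.sh create_subsystem
sub=0; NULL_DIF=2   # NULL_DIF selects the null bdev's --dif-type for this pass
scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type "$NULL_DIF"
scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" --serial-number "53313233-$sub" --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" -t tcp -a 10.0.0.2 -s 4420
# ... and the matching teardown, as in dif.sh destroy_subsystem
scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
scripts/rpc.py bdev_null_delete "bdev_null$sub"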
00:27:31.637 01:01:24 -- nvmf/common.sh@546 -- # IFS=, 00:27:31.637 01:01:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:31.637 "params": { 00:27:31.637 "name": "Nvme0", 00:27:31.637 "trtype": "tcp", 00:27:31.637 "traddr": "10.0.0.2", 00:27:31.637 "adrfam": "ipv4", 00:27:31.637 "trsvcid": "4420", 00:27:31.637 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:31.637 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:31.637 "hdgst": false, 00:27:31.637 "ddgst": false 00:27:31.637 }, 00:27:31.637 "method": "bdev_nvme_attach_controller" 00:27:31.637 },{ 00:27:31.637 "params": { 00:27:31.637 "name": "Nvme1", 00:27:31.637 "trtype": "tcp", 00:27:31.637 "traddr": "10.0.0.2", 00:27:31.637 "adrfam": "ipv4", 00:27:31.637 "trsvcid": "4420", 00:27:31.637 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:31.637 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:31.637 "hdgst": false, 00:27:31.637 "ddgst": false 00:27:31.637 }, 00:27:31.637 "method": "bdev_nvme_attach_controller" 00:27:31.637 },{ 00:27:31.637 "params": { 00:27:31.637 "name": "Nvme2", 00:27:31.637 "trtype": "tcp", 00:27:31.637 "traddr": "10.0.0.2", 00:27:31.637 "adrfam": "ipv4", 00:27:31.637 "trsvcid": "4420", 00:27:31.637 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:31.637 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:31.637 "hdgst": false, 00:27:31.637 "ddgst": false 00:27:31.637 }, 00:27:31.637 "method": "bdev_nvme_attach_controller" 00:27:31.637 }' 00:27:31.637 01:01:24 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:31.637 01:01:24 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:31.637 01:01:24 -- common/autotest_common.sh@1333 -- # break 00:27:31.637 01:01:24 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:31.637 01:01:24 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:31.896 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:31.896 ... 00:27:31.896 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:31.896 ... 00:27:31.896 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:27:31.896 ... 
00:27:31.896 fio-3.35 00:27:31.896 Starting 24 threads 00:27:32.153 EAL: No free 2048 kB hugepages reported on node 1 00:27:44.385 00:27:44.385 filename0: (groupid=0, jobs=1): err= 0: pid=2930975: Sat Apr 27 01:01:35 2024 00:27:44.385 read: IOPS=503, BW=2013KiB/s (2061kB/s)(19.7MiB/10014msec) 00:27:44.385 slat (nsec): min=5976, max=54699, avg=14597.74, stdev=8479.69 00:27:44.385 clat (usec): min=5319, max=38249, avg=31676.03, stdev=2123.69 00:27:44.385 lat (usec): min=5331, max=38265, avg=31690.63, stdev=2123.73 00:27:44.385 clat percentiles (usec): 00:27:44.385 | 1.00th=[23725], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.385 | 30.00th=[31589], 40.00th=[31851], 50.00th=[31851], 60.00th=[31851], 00:27:44.385 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.385 | 99.00th=[32900], 99.50th=[33162], 99.90th=[38011], 99.95th=[38011], 00:27:44.385 | 99.99th=[38011] 00:27:44.385 bw ( KiB/s): min= 1920, max= 2176, per=4.20%, avg=2009.60, stdev=73.12, samples=20 00:27:44.385 iops : min= 480, max= 544, avg=502.40, stdev=18.28, samples=20 00:27:44.385 lat (msec) : 10=0.50%, 20=0.28%, 50=99.23% 00:27:44.385 cpu : usr=99.16%, sys=0.41%, ctx=14, majf=0, minf=1634 00:27:44.385 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.385 filename0: (groupid=0, jobs=1): err= 0: pid=2930976: Sat Apr 27 01:01:35 2024 00:27:44.385 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10039msec) 00:27:44.385 slat (usec): min=4, max=125, avg=55.71, stdev=24.01 00:27:44.385 clat (usec): min=29674, max=64758, avg=31686.90, stdev=2443.53 00:27:44.385 lat (usec): min=29702, max=64778, avg=31742.61, stdev=2441.18 00:27:44.385 clat percentiles (usec): 00:27:44.385 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.385 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.385 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.385 | 99.00th=[32900], 99.50th=[56886], 99.90th=[64750], 99.95th=[64750], 00:27:44.385 | 99.99th=[64750] 00:27:44.385 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1990.40, stdev=77.42, samples=20 00:27:44.385 iops : min= 448, max= 512, avg=497.60, stdev=19.35, samples=20 00:27:44.385 lat (msec) : 50=99.36%, 100=0.64% 00:27:44.385 cpu : usr=98.91%, sys=0.65%, ctx=14, majf=0, minf=1635 00:27:44.385 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.385 filename0: (groupid=0, jobs=1): err= 0: pid=2930977: Sat Apr 27 01:01:35 2024 00:27:44.385 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10009msec) 00:27:44.385 slat (usec): min=5, max=144, avg=33.19, stdev=29.89 00:27:44.385 clat (usec): min=29571, max=61746, avg=31834.59, stdev=1802.25 00:27:44.385 lat (usec): min=29649, max=61774, avg=31867.78, stdev=1797.57 00:27:44.385 clat percentiles (usec): 00:27:44.385 | 1.00th=[30278], 5.00th=[30802], 
10.00th=[31065], 20.00th=[31327], 00:27:44.385 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.385 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.385 | 99.00th=[32900], 99.50th=[32900], 99.90th=[61604], 99.95th=[61604], 00:27:44.385 | 99.99th=[61604] 00:27:44.385 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:27:44.385 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:27:44.385 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.385 cpu : usr=98.65%, sys=0.92%, ctx=54, majf=0, minf=1634 00:27:44.385 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.385 filename0: (groupid=0, jobs=1): err= 0: pid=2930978: Sat Apr 27 01:01:35 2024 00:27:44.385 read: IOPS=498, BW=1993KiB/s (2041kB/s)(19.6MiB/10049msec) 00:27:44.385 slat (usec): min=7, max=104, avg=26.13, stdev=18.54 00:27:44.385 clat (usec): min=12369, max=66554, avg=31847.98, stdev=1919.00 00:27:44.385 lat (usec): min=12384, max=66609, avg=31874.11, stdev=1919.14 00:27:44.385 clat percentiles (usec): 00:27:44.385 | 1.00th=[30540], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.385 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.385 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.385 | 99.00th=[32900], 99.50th=[43779], 99.90th=[58459], 99.95th=[59507], 00:27:44.385 | 99.99th=[66323] 00:27:44.385 bw ( KiB/s): min= 1897, max= 2048, per=4.17%, avg=1995.80, stdev=65.79, samples=20 00:27:44.385 iops : min= 474, max= 512, avg=498.90, stdev=16.51, samples=20 00:27:44.385 lat (msec) : 20=0.04%, 50=99.60%, 100=0.36% 00:27:44.385 cpu : usr=98.77%, sys=0.75%, ctx=98, majf=0, minf=1635 00:27:44.385 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.385 filename0: (groupid=0, jobs=1): err= 0: pid=2930979: Sat Apr 27 01:01:35 2024 00:27:44.385 read: IOPS=499, BW=1999KiB/s (2047kB/s)(19.6MiB/10020msec) 00:27:44.385 slat (usec): min=6, max=100, avg=20.91, stdev=19.52 00:27:44.385 clat (usec): min=20086, max=55258, avg=31832.84, stdev=1589.53 00:27:44.385 lat (usec): min=20098, max=55288, avg=31853.75, stdev=1588.92 00:27:44.385 clat percentiles (usec): 00:27:44.385 | 1.00th=[30540], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.385 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.385 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.385 | 99.00th=[32900], 99.50th=[32900], 99.90th=[55313], 99.95th=[55313], 00:27:44.385 | 99.99th=[55313] 00:27:44.385 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1996.95, stdev=64.15, samples=20 00:27:44.385 iops : min= 480, max= 512, avg=499.20, stdev=16.08, samples=20 00:27:44.385 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.385 cpu : usr=98.47%, sys=0.90%, ctx=34, majf=0, minf=1636 
00:27:44.385 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.385 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.385 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.385 filename0: (groupid=0, jobs=1): err= 0: pid=2930980: Sat Apr 27 01:01:35 2024 00:27:44.385 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.6MiB/10045msec) 00:27:44.385 slat (usec): min=5, max=137, avg=58.91, stdev=28.09 00:27:44.385 clat (usec): min=29540, max=57360, avg=31608.81, stdev=1657.08 00:27:44.385 lat (usec): min=29568, max=57373, avg=31667.72, stdev=1652.79 00:27:44.385 clat percentiles (usec): 00:27:44.385 | 1.00th=[30016], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.385 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.385 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.385 | 99.00th=[32900], 99.50th=[40109], 99.90th=[57410], 99.95th=[57410], 00:27:44.385 | 99.99th=[57410] 00:27:44.385 bw ( KiB/s): min= 1897, max= 2048, per=4.17%, avg=1995.65, stdev=65.97, samples=20 00:27:44.385 iops : min= 474, max= 512, avg=498.90, stdev=16.51, samples=20 00:27:44.385 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.385 cpu : usr=98.82%, sys=0.77%, ctx=17, majf=0, minf=1634 00:27:44.385 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.385 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.386 filename0: (groupid=0, jobs=1): err= 0: pid=2930981: Sat Apr 27 01:01:35 2024 00:27:44.386 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10039msec) 00:27:44.386 slat (usec): min=5, max=125, avg=54.23, stdev=24.24 00:27:44.386 clat (usec): min=29768, max=61289, avg=31670.58, stdev=2403.43 00:27:44.386 lat (usec): min=29812, max=61314, avg=31724.82, stdev=2402.02 00:27:44.386 clat percentiles (usec): 00:27:44.386 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.386 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:27:44.386 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.386 | 99.00th=[32900], 99.50th=[60031], 99.90th=[61080], 99.95th=[61080], 00:27:44.386 | 99.99th=[61080] 00:27:44.386 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:27:44.386 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:27:44.386 lat (msec) : 50=99.36%, 100=0.64% 00:27:44.386 cpu : usr=98.89%, sys=0.69%, ctx=14, majf=0, minf=1635 00:27:44.386 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.386 filename0: (groupid=0, jobs=1): err= 0: pid=2930982: Sat Apr 27 01:01:35 2024 00:27:44.386 read: IOPS=497, BW=1990KiB/s (2037kB/s)(19.5MiB/10036msec) 00:27:44.386 slat (nsec): min=6132, max=91281, 
avg=22859.97, stdev=15930.05 00:27:44.386 clat (usec): min=12148, max=62656, avg=31967.95, stdev=2946.98 00:27:44.386 lat (usec): min=12163, max=62688, avg=31990.81, stdev=2946.66 00:27:44.386 clat percentiles (usec): 00:27:44.386 | 1.00th=[30278], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.386 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.386 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.386 | 99.00th=[49546], 99.50th=[59507], 99.90th=[62653], 99.95th=[62653], 00:27:44.386 | 99.99th=[62653] 00:27:44.386 bw ( KiB/s): min= 1795, max= 2064, per=4.17%, avg=1994.26, stdev=77.45, samples=19 00:27:44.386 iops : min= 448, max= 516, avg=498.53, stdev=19.47, samples=19 00:27:44.386 lat (msec) : 20=0.44%, 50=98.88%, 100=0.68% 00:27:44.386 cpu : usr=99.03%, sys=0.55%, ctx=14, majf=0, minf=1634 00:27:44.386 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:27:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.386 filename1: (groupid=0, jobs=1): err= 0: pid=2930983: Sat Apr 27 01:01:35 2024 00:27:44.386 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.6MiB/10045msec) 00:27:44.386 slat (usec): min=6, max=130, avg=55.08, stdev=25.92 00:27:44.386 clat (usec): min=29588, max=57360, avg=31644.10, stdev=1647.46 00:27:44.386 lat (usec): min=29620, max=57372, avg=31699.18, stdev=1643.63 00:27:44.386 clat percentiles (usec): 00:27:44.386 | 1.00th=[30016], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.386 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.386 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.386 | 99.00th=[32900], 99.50th=[39584], 99.90th=[57410], 99.95th=[57410], 00:27:44.386 | 99.99th=[57410] 00:27:44.386 bw ( KiB/s): min= 1897, max= 2048, per=4.17%, avg=1995.65, stdev=65.97, samples=20 00:27:44.386 iops : min= 474, max= 512, avg=498.90, stdev=16.51, samples=20 00:27:44.386 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.386 cpu : usr=98.88%, sys=0.67%, ctx=15, majf=0, minf=1633 00:27:44.386 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 issued rwts: total=5008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.386 filename1: (groupid=0, jobs=1): err= 0: pid=2930984: Sat Apr 27 01:01:35 2024 00:27:44.386 read: IOPS=503, BW=2014KiB/s (2062kB/s)(19.7MiB/10012msec) 00:27:44.386 slat (nsec): min=5994, max=92208, avg=19070.63, stdev=15407.82 00:27:44.386 clat (usec): min=7011, max=50032, avg=31636.91, stdev=2117.62 00:27:44.386 lat (usec): min=7023, max=50056, avg=31655.98, stdev=2117.83 00:27:44.386 clat percentiles (usec): 00:27:44.386 | 1.00th=[24511], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.386 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.386 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.386 | 99.00th=[32900], 99.50th=[33162], 99.90th=[35390], 99.95th=[36963], 00:27:44.386 | 99.99th=[50070] 00:27:44.386 bw ( 
KiB/s): min= 1920, max= 2180, per=4.20%, avg=2009.80, stdev=73.60, samples=20 00:27:44.386 iops : min= 480, max= 545, avg=502.45, stdev=18.40, samples=20 00:27:44.386 lat (msec) : 10=0.46%, 20=0.40%, 50=99.11%, 100=0.04% 00:27:44.386 cpu : usr=98.90%, sys=0.64%, ctx=15, majf=0, minf=1637 00:27:44.386 IO depths : 1=6.1%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:27:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.386 filename1: (groupid=0, jobs=1): err= 0: pid=2930985: Sat Apr 27 01:01:35 2024 00:27:44.386 read: IOPS=503, BW=2014KiB/s (2062kB/s)(19.7MiB/10012msec) 00:27:44.386 slat (nsec): min=5722, max=54548, avg=16808.02, stdev=8698.75 00:27:44.386 clat (usec): min=6409, max=38098, avg=31626.51, stdev=2187.42 00:27:44.386 lat (usec): min=6422, max=38110, avg=31643.32, stdev=2187.73 00:27:44.386 clat percentiles (usec): 00:27:44.386 | 1.00th=[24249], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.386 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.386 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.386 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33817], 99.95th=[38011], 00:27:44.386 | 99.99th=[38011] 00:27:44.386 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2014.32, stdev=71.93, samples=19 00:27:44.386 iops : min= 480, max= 544, avg=503.58, stdev=17.98, samples=19 00:27:44.386 lat (msec) : 10=0.63%, 20=0.32%, 50=99.05% 00:27:44.386 cpu : usr=98.97%, sys=0.59%, ctx=14, majf=0, minf=1634 00:27:44.386 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.386 filename1: (groupid=0, jobs=1): err= 0: pid=2930986: Sat Apr 27 01:01:35 2024 00:27:44.386 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10037msec) 00:27:44.386 slat (usec): min=6, max=136, avg=55.83, stdev=26.85 00:27:44.386 clat (usec): min=29597, max=60456, avg=31628.16, stdev=2329.66 00:27:44.386 lat (usec): min=29609, max=60468, avg=31683.99, stdev=2328.74 00:27:44.386 clat percentiles (usec): 00:27:44.386 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30540], 20.00th=[30802], 00:27:44.386 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:27:44.386 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.386 | 99.00th=[32900], 99.50th=[59507], 99.90th=[60031], 99.95th=[60031], 00:27:44.386 | 99.99th=[60556] 00:27:44.386 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:27:44.386 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:27:44.386 lat (msec) : 50=99.36%, 100=0.64% 00:27:44.386 cpu : usr=98.94%, sys=0.63%, ctx=15, majf=0, minf=1635 00:27:44.386 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.386 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.386 issued rwts: total=4992,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:27:44.386 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.386 filename1: (groupid=0, jobs=1): err= 0: pid=2930987: Sat Apr 27 01:01:35 2024 00:27:44.386 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.5MiB/10042msec) 00:27:44.386 slat (usec): min=4, max=125, avg=54.75, stdev=22.41 00:27:44.386 clat (usec): min=27515, max=64753, avg=31691.81, stdev=2533.76 00:27:44.386 lat (usec): min=27523, max=64773, avg=31746.56, stdev=2532.03 00:27:44.386 clat percentiles (usec): 00:27:44.386 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.387 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.387 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.387 | 99.00th=[32900], 99.50th=[59507], 99.90th=[64750], 99.95th=[64750], 00:27:44.387 | 99.99th=[64750] 00:27:44.387 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1990.40, stdev=77.42, samples=20 00:27:44.387 iops : min= 448, max= 512, avg=497.60, stdev=19.35, samples=20 00:27:44.387 lat (msec) : 50=99.36%, 100=0.64% 00:27:44.387 cpu : usr=98.35%, sys=0.90%, ctx=89, majf=0, minf=1633 00:27:44.387 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.387 filename1: (groupid=0, jobs=1): err= 0: pid=2930988: Sat Apr 27 01:01:35 2024 00:27:44.387 read: IOPS=498, BW=1994KiB/s (2042kB/s)(19.5MiB/10012msec) 00:27:44.387 slat (usec): min=5, max=128, avg=17.96, stdev=21.26 00:27:44.387 clat (usec): min=29467, max=64623, avg=31949.67, stdev=1936.11 00:27:44.387 lat (usec): min=29499, max=64646, avg=31967.63, stdev=1934.63 00:27:44.387 clat percentiles (usec): 00:27:44.387 | 1.00th=[30802], 5.00th=[31065], 10.00th=[31065], 20.00th=[31327], 00:27:44.387 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.387 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.387 | 99.00th=[32900], 99.50th=[33162], 99.90th=[64750], 99.95th=[64750], 00:27:44.387 | 99.99th=[64750] 00:27:44.387 bw ( KiB/s): min= 1788, max= 2048, per=4.17%, avg=1993.89, stdev=78.27, samples=19 00:27:44.387 iops : min= 447, max= 512, avg=498.47, stdev=19.57, samples=19 00:27:44.387 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.387 cpu : usr=98.80%, sys=0.78%, ctx=43, majf=0, minf=1634 00:27:44.387 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.387 filename1: (groupid=0, jobs=1): err= 0: pid=2930989: Sat Apr 27 01:01:35 2024 00:27:44.387 read: IOPS=497, BW=1988KiB/s (2036kB/s)(19.5MiB/10044msec) 00:27:44.387 slat (usec): min=6, max=148, avg=56.90, stdev=31.00 00:27:44.387 clat (usec): min=28770, max=66772, avg=31761.28, stdev=2545.07 00:27:44.387 lat (usec): min=28780, max=66799, avg=31818.18, stdev=2540.99 00:27:44.387 clat percentiles (usec): 00:27:44.387 | 1.00th=[30016], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.387 | 
30.00th=[31327], 40.00th=[31327], 50.00th=[31589], 60.00th=[31851], 00:27:44.387 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.387 | 99.00th=[32900], 99.50th=[57410], 99.90th=[66847], 99.95th=[66847], 00:27:44.387 | 99.99th=[66847] 00:27:44.387 bw ( KiB/s): min= 1792, max= 2048, per=4.16%, avg=1990.00, stdev=76.60, samples=20 00:27:44.387 iops : min= 448, max= 512, avg=497.50, stdev=19.15, samples=20 00:27:44.387 lat (msec) : 50=99.36%, 100=0.64% 00:27:44.387 cpu : usr=98.84%, sys=0.72%, ctx=35, majf=0, minf=1634 00:27:44.387 IO depths : 1=1.0%, 2=7.2%, 4=25.0%, 8=55.3%, 16=11.5%, 32=0.0%, >=64=0.0% 00:27:44.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.387 filename1: (groupid=0, jobs=1): err= 0: pid=2930990: Sat Apr 27 01:01:35 2024 00:27:44.387 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10035msec) 00:27:44.387 slat (nsec): min=6033, max=94874, avg=24319.75, stdev=18291.09 00:27:44.387 clat (usec): min=13167, max=61800, avg=31966.30, stdev=2543.57 00:27:44.387 lat (usec): min=13200, max=61827, avg=31990.62, stdev=2542.97 00:27:44.387 clat percentiles (usec): 00:27:44.387 | 1.00th=[30540], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.387 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.387 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.387 | 99.00th=[32900], 99.50th=[58983], 99.90th=[61604], 99.95th=[61604], 00:27:44.387 | 99.99th=[61604] 00:27:44.387 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1993.26, stdev=56.02, samples=19 00:27:44.387 iops : min= 480, max= 512, avg=498.32, stdev=14.00, samples=19 00:27:44.387 lat (msec) : 20=0.14%, 50=99.22%, 100=0.64% 00:27:44.387 cpu : usr=98.23%, sys=1.00%, ctx=58, majf=0, minf=1632 00:27:44.387 IO depths : 1=0.9%, 2=7.2%, 4=24.9%, 8=55.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:27:44.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 complete : 0=0.0%, 4=94.4%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 issued rwts: total=4990,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.387 filename2: (groupid=0, jobs=1): err= 0: pid=2930991: Sat Apr 27 01:01:35 2024 00:27:44.387 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10008msec) 00:27:44.387 slat (usec): min=5, max=131, avg=40.53, stdev=27.90 00:27:44.387 clat (usec): min=29545, max=60623, avg=31789.02, stdev=1748.26 00:27:44.387 lat (usec): min=29582, max=60650, avg=31829.55, stdev=1743.40 00:27:44.387 clat percentiles (usec): 00:27:44.387 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.387 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.387 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.387 | 99.00th=[32900], 99.50th=[33162], 99.90th=[60556], 99.95th=[60556], 00:27:44.387 | 99.99th=[60556] 00:27:44.387 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:27:44.387 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:27:44.387 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.387 cpu : usr=98.99%, sys=0.58%, ctx=15, majf=0, minf=1634 00:27:44.387 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.387 filename2: (groupid=0, jobs=1): err= 0: pid=2930992: Sat Apr 27 01:01:35 2024 00:27:44.387 read: IOPS=497, BW=1990KiB/s (2038kB/s)(19.5MiB/10035msec) 00:27:44.387 slat (usec): min=6, max=124, avg=54.55, stdev=24.63 00:27:44.387 clat (usec): min=29661, max=60437, avg=31646.84, stdev=2255.00 00:27:44.387 lat (usec): min=29674, max=60447, avg=31701.39, stdev=2253.84 00:27:44.387 clat percentiles (usec): 00:27:44.387 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.387 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31327], 60.00th=[31589], 00:27:44.387 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.387 | 99.00th=[32900], 99.50th=[57410], 99.90th=[60031], 99.95th=[60556], 00:27:44.387 | 99.99th=[60556] 00:27:44.387 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.11, stdev=64.93, samples=19 00:27:44.387 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:27:44.387 lat (msec) : 50=99.36%, 100=0.64% 00:27:44.387 cpu : usr=99.07%, sys=0.51%, ctx=13, majf=0, minf=1631 00:27:44.387 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.387 filename2: (groupid=0, jobs=1): err= 0: pid=2930993: Sat Apr 27 01:01:35 2024 00:27:44.387 read: IOPS=503, BW=2015KiB/s (2063kB/s)(19.7MiB/10005msec) 00:27:44.387 slat (usec): min=6, max=128, avg=16.51, stdev= 8.86 00:27:44.387 clat (usec): min=6638, max=36085, avg=31606.16, stdev=2320.98 00:27:44.387 lat (usec): min=6649, max=36132, avg=31622.67, stdev=2321.48 00:27:44.387 clat percentiles (usec): 00:27:44.387 | 1.00th=[23987], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.387 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.387 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.387 | 99.00th=[32900], 99.50th=[32900], 99.90th=[35914], 99.95th=[35914], 00:27:44.387 | 99.99th=[35914] 00:27:44.387 bw ( KiB/s): min= 1920, max= 2176, per=4.21%, avg=2014.32, stdev=83.63, samples=19 00:27:44.387 iops : min= 480, max= 544, avg=503.58, stdev=20.91, samples=19 00:27:44.387 lat (msec) : 10=0.63%, 20=0.32%, 50=99.05% 00:27:44.387 cpu : usr=99.02%, sys=0.56%, ctx=15, majf=0, minf=1635 00:27:44.387 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.387 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.387 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.387 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.387 filename2: (groupid=0, jobs=1): err= 0: pid=2930994: Sat Apr 27 01:01:35 2024 00:27:44.388 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10040msec) 00:27:44.388 slat (usec): min=4, max=182, avg=55.98, stdev=24.29 00:27:44.388 clat 
(usec): min=29664, max=66003, avg=31699.90, stdev=2497.85 00:27:44.388 lat (usec): min=29696, max=66023, avg=31755.88, stdev=2495.13 00:27:44.388 clat percentiles (usec): 00:27:44.388 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.388 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.388 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.388 | 99.00th=[32900], 99.50th=[57410], 99.90th=[65799], 99.95th=[65799], 00:27:44.388 | 99.99th=[65799] 00:27:44.388 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1990.15, stdev=77.42, samples=20 00:27:44.388 iops : min= 448, max= 512, avg=497.50, stdev=19.45, samples=20 00:27:44.388 lat (msec) : 50=99.36%, 100=0.64% 00:27:44.388 cpu : usr=98.96%, sys=0.60%, ctx=16, majf=0, minf=1633 00:27:44.388 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.388 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.388 filename2: (groupid=0, jobs=1): err= 0: pid=2930995: Sat Apr 27 01:01:35 2024 00:27:44.388 read: IOPS=503, BW=2014KiB/s (2062kB/s)(19.7MiB/10010msec) 00:27:44.388 slat (nsec): min=5673, max=54438, avg=16626.30, stdev=8542.88 00:27:44.388 clat (usec): min=6449, max=33755, avg=31635.63, stdev=2201.98 00:27:44.388 lat (usec): min=6461, max=33778, avg=31652.25, stdev=2202.13 00:27:44.388 clat percentiles (usec): 00:27:44.388 | 1.00th=[28181], 5.00th=[30802], 10.00th=[31065], 20.00th=[31327], 00:27:44.388 | 30.00th=[31589], 40.00th=[31589], 50.00th=[31851], 60.00th=[31851], 00:27:44.388 | 70.00th=[32113], 80.00th=[32375], 90.00th=[32637], 95.00th=[32637], 00:27:44.388 | 99.00th=[32900], 99.50th=[32900], 99.90th=[33817], 99.95th=[33817], 00:27:44.388 | 99.99th=[33817] 00:27:44.388 bw ( KiB/s): min= 1920, max= 2180, per=4.21%, avg=2014.53, stdev=72.43, samples=19 00:27:44.388 iops : min= 480, max= 545, avg=503.63, stdev=18.11, samples=19 00:27:44.388 lat (msec) : 10=0.63%, 20=0.32%, 50=99.05% 00:27:44.388 cpu : usr=98.93%, sys=0.62%, ctx=13, majf=0, minf=1635 00:27:44.388 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:27:44.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 issued rwts: total=5040,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.388 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.388 filename2: (groupid=0, jobs=1): err= 0: pid=2930996: Sat Apr 27 01:01:35 2024 00:27:44.388 read: IOPS=498, BW=1995KiB/s (2042kB/s)(19.5MiB/10011msec) 00:27:44.388 slat (usec): min=3, max=118, avg=43.46, stdev=26.12 00:27:44.388 clat (usec): min=26275, max=67589, avg=31757.26, stdev=1917.25 00:27:44.388 lat (usec): min=26289, max=67610, avg=31800.73, stdev=1913.87 00:27:44.388 clat percentiles (usec): 00:27:44.388 | 1.00th=[30278], 5.00th=[30802], 10.00th=[30802], 20.00th=[31065], 00:27:44.388 | 30.00th=[31327], 40.00th=[31589], 50.00th=[31589], 60.00th=[31851], 00:27:44.388 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32637], 95.00th=[32637], 00:27:44.388 | 99.00th=[32900], 99.50th=[32900], 99.90th=[63177], 99.95th=[63177], 00:27:44.388 | 99.99th=[67634] 00:27:44.388 bw ( KiB/s): min= 1795, max= 2048, per=4.17%, 
avg=1994.26, stdev=77.26, samples=19 00:27:44.388 iops : min= 448, max= 512, avg=498.53, stdev=19.42, samples=19 00:27:44.388 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.388 cpu : usr=99.04%, sys=0.57%, ctx=14, majf=0, minf=1637 00:27:44.388 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.388 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.388 filename2: (groupid=0, jobs=1): err= 0: pid=2930997: Sat Apr 27 01:01:35 2024 00:27:44.388 read: IOPS=498, BW=1995KiB/s (2043kB/s)(19.5MiB/10007msec) 00:27:44.388 slat (usec): min=4, max=126, avg=57.08, stdev=26.28 00:27:44.388 clat (usec): min=26787, max=63388, avg=31610.87, stdev=1713.12 00:27:44.388 lat (usec): min=26795, max=63408, avg=31667.95, stdev=1709.93 00:27:44.388 clat percentiles (usec): 00:27:44.388 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.388 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.388 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.388 | 99.00th=[32900], 99.50th=[32900], 99.90th=[58983], 99.95th=[58983], 00:27:44.388 | 99.99th=[63177] 00:27:44.388 bw ( KiB/s): min= 1920, max= 2048, per=4.17%, avg=1994.26, stdev=64.74, samples=19 00:27:44.388 iops : min= 480, max= 512, avg=498.53, stdev=16.23, samples=19 00:27:44.388 lat (msec) : 50=99.68%, 100=0.32% 00:27:44.388 cpu : usr=99.23%, sys=0.39%, ctx=16, majf=0, minf=1636 00:27:44.388 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.388 latency : target=0, window=0, percentile=100.00%, depth=16 00:27:44.388 filename2: (groupid=0, jobs=1): err= 0: pid=2930998: Sat Apr 27 01:01:35 2024 00:27:44.388 read: IOPS=497, BW=1989KiB/s (2037kB/s)(19.5MiB/10039msec) 00:27:44.388 slat (usec): min=5, max=131, avg=55.44, stdev=23.34 00:27:44.388 clat (usec): min=26961, max=68400, avg=31664.17, stdev=2439.15 00:27:44.388 lat (usec): min=26976, max=68426, avg=31719.61, stdev=2437.47 00:27:44.388 clat percentiles (usec): 00:27:44.388 | 1.00th=[30278], 5.00th=[30540], 10.00th=[30802], 20.00th=[31065], 00:27:44.388 | 30.00th=[31065], 40.00th=[31327], 50.00th=[31589], 60.00th=[31589], 00:27:44.388 | 70.00th=[31851], 80.00th=[32113], 90.00th=[32375], 95.00th=[32637], 00:27:44.388 | 99.00th=[32900], 99.50th=[56886], 99.90th=[64226], 99.95th=[64226], 00:27:44.388 | 99.99th=[68682] 00:27:44.388 bw ( KiB/s): min= 1795, max= 2048, per=4.16%, avg=1990.55, stdev=77.01, samples=20 00:27:44.388 iops : min= 448, max= 512, avg=497.60, stdev=19.35, samples=20 00:27:44.388 lat (msec) : 50=99.36%, 100=0.64% 00:27:44.388 cpu : usr=99.05%, sys=0.57%, ctx=13, majf=0, minf=1636 00:27:44.388 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:27:44.388 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:44.388 issued rwts: total=4992,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:44.388 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:27:44.388 00:27:44.388 Run status group 0 (all jobs): 00:27:44.388 READ: bw=46.7MiB/s (49.0MB/s), 1988KiB/s-2015KiB/s (2036kB/s-2063kB/s), io=469MiB (492MB), run=10005-10049msec 00:27:44.388 ----------------------------------------------------- 00:27:44.388 Suppressions used: 00:27:44.388 count bytes template 00:27:44.388 45 402 /usr/src/fio/parse.c 00:27:44.388 1 8 libtcmalloc_minimal.so 00:27:44.388 1 904 libcrypto.so 00:27:44.388 ----------------------------------------------------- 00:27:44.388 00:27:44.388 01:01:36 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:27:44.388 01:01:36 -- target/dif.sh@43 -- # local sub 00:27:44.388 01:01:36 -- target/dif.sh@45 -- # for sub in "$@" 00:27:44.388 01:01:36 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:44.388 01:01:36 -- target/dif.sh@36 -- # local sub_id=0 00:27:44.388 01:01:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:44.388 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.388 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.388 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.388 01:01:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:44.388 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.388 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.388 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.388 01:01:36 -- target/dif.sh@45 -- # for sub in "$@" 00:27:44.388 01:01:36 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:44.388 01:01:36 -- target/dif.sh@36 -- # local sub_id=1 00:27:44.388 01:01:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:44.388 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.388 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.388 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.388 01:01:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:44.388 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.388 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.388 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.388 01:01:36 -- target/dif.sh@45 -- # for sub in "$@" 00:27:44.388 01:01:36 -- target/dif.sh@46 -- # destroy_subsystem 2 00:27:44.388 01:01:36 -- target/dif.sh@36 -- # local sub_id=2 00:27:44.389 01:01:36 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@115 -- # NULL_DIF=1 00:27:44.389 01:01:36 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:27:44.389 01:01:36 -- target/dif.sh@115 -- # numjobs=2 00:27:44.389 01:01:36 -- target/dif.sh@115 -- # iodepth=8 00:27:44.389 01:01:36 -- target/dif.sh@115 -- # runtime=5 00:27:44.389 01:01:36 -- target/dif.sh@115 -- # files=1 00:27:44.389 01:01:36 -- target/dif.sh@117 -- # create_subsystems 0 1 00:27:44.389 01:01:36 -- target/dif.sh@28 -- 
# local sub 00:27:44.389 01:01:36 -- target/dif.sh@30 -- # for sub in "$@" 00:27:44.389 01:01:36 -- target/dif.sh@31 -- # create_subsystem 0 00:27:44.389 01:01:36 -- target/dif.sh@18 -- # local sub_id=0 00:27:44.389 01:01:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 bdev_null0 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 [2024-04-27 01:01:36.630880] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@30 -- # for sub in "$@" 00:27:44.389 01:01:36 -- target/dif.sh@31 -- # create_subsystem 1 00:27:44.389 01:01:36 -- target/dif.sh@18 -- # local sub_id=1 00:27:44.389 01:01:36 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 bdev_null1 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:44.389 01:01:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.389 01:01:36 -- common/autotest_common.sh@10 -- # set +x 00:27:44.389 01:01:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:44.389 01:01:36 -- target/dif.sh@118 -- # fio /dev/fd/62 00:27:44.389 01:01:36 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.389 01:01:36 -- 
common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.389 01:01:36 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:44.389 01:01:36 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:44.389 01:01:36 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:44.389 01:01:36 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:44.389 01:01:36 -- common/autotest_common.sh@1327 -- # shift 00:27:44.389 01:01:36 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:44.389 01:01:36 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:27:44.389 01:01:36 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:44.389 01:01:36 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:27:44.389 01:01:36 -- nvmf/common.sh@521 -- # config=() 00:27:44.389 01:01:36 -- nvmf/common.sh@521 -- # local subsystem config 00:27:44.389 01:01:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:44.389 01:01:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:44.389 { 00:27:44.389 "params": { 00:27:44.389 "name": "Nvme$subsystem", 00:27:44.389 "trtype": "$TEST_TRANSPORT", 00:27:44.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.389 "adrfam": "ipv4", 00:27:44.389 "trsvcid": "$NVMF_PORT", 00:27:44.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.389 "hdgst": ${hdgst:-false}, 00:27:44.389 "ddgst": ${ddgst:-false} 00:27:44.389 }, 00:27:44.389 "method": "bdev_nvme_attach_controller" 00:27:44.389 } 00:27:44.389 EOF 00:27:44.389 )") 00:27:44.389 01:01:36 -- target/dif.sh@82 -- # gen_fio_conf 00:27:44.389 01:01:36 -- target/dif.sh@54 -- # local file 00:27:44.389 01:01:36 -- target/dif.sh@56 -- # cat 00:27:44.389 01:01:36 -- nvmf/common.sh@543 -- # cat 00:27:44.389 01:01:36 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:44.389 01:01:36 -- common/autotest_common.sh@1331 -- # grep libasan 00:27:44.389 01:01:36 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:44.389 01:01:36 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:44.389 01:01:36 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:44.389 01:01:36 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:44.389 { 00:27:44.389 "params": { 00:27:44.389 "name": "Nvme$subsystem", 00:27:44.389 "trtype": "$TEST_TRANSPORT", 00:27:44.389 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:44.389 "adrfam": "ipv4", 00:27:44.389 "trsvcid": "$NVMF_PORT", 00:27:44.389 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:44.389 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:44.389 "hdgst": ${hdgst:-false}, 00:27:44.389 "ddgst": ${ddgst:-false} 00:27:44.389 }, 00:27:44.389 "method": "bdev_nvme_attach_controller" 00:27:44.389 } 00:27:44.389 EOF 00:27:44.389 )") 00:27:44.389 01:01:36 -- target/dif.sh@72 -- # (( file <= files )) 00:27:44.389 01:01:36 -- target/dif.sh@73 -- # cat 00:27:44.389 01:01:36 -- nvmf/common.sh@543 -- # cat 00:27:44.389 01:01:36 -- target/dif.sh@72 -- # (( file++ )) 00:27:44.389 01:01:36 -- target/dif.sh@72 -- # (( file <= files )) 00:27:44.389 01:01:36 -- nvmf/common.sh@545 -- # jq . 
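At this point dif.sh hands fio two anonymous file descriptors: /dev/fd/61 carries the generated job file and /dev/fd/62 the bdev JSON configuration built by gen_nvmf_target_json, whose per-subsystem heredoc loop is visible in the trace above. A condensed, standalone sketch of that pattern follows; the function name here is illustrative, and both the jq pretty-printing and the splicing of the joined fragments into the full document expected by --spdk_json_conf are elided:

# Build one bdev_nvme_attach_controller fragment per subsystem id and
# join them with commas -- the same pattern as the traced heredoc loop.
gen_nvmf_target_json_sketch() {
	local subsystem config=()
	for subsystem in "${@:-0}"; do
		config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
		)")
	done
	local IFS=,
	printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json_sketch 0 1   # yields the two fragments echoed in the trace below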
00:27:44.389 01:01:36 -- nvmf/common.sh@546 -- # IFS=, 00:27:44.389 01:01:36 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:44.389 "params": { 00:27:44.389 "name": "Nvme0", 00:27:44.389 "trtype": "tcp", 00:27:44.389 "traddr": "10.0.0.2", 00:27:44.389 "adrfam": "ipv4", 00:27:44.389 "trsvcid": "4420", 00:27:44.389 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:44.389 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:44.389 "hdgst": false, 00:27:44.389 "ddgst": false 00:27:44.389 }, 00:27:44.389 "method": "bdev_nvme_attach_controller" 00:27:44.389 },{ 00:27:44.389 "params": { 00:27:44.389 "name": "Nvme1", 00:27:44.389 "trtype": "tcp", 00:27:44.389 "traddr": "10.0.0.2", 00:27:44.389 "adrfam": "ipv4", 00:27:44.389 "trsvcid": "4420", 00:27:44.389 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:44.389 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:44.389 "hdgst": false, 00:27:44.389 "ddgst": false 00:27:44.389 }, 00:27:44.389 "method": "bdev_nvme_attach_controller" 00:27:44.389 }' 00:27:44.389 01:01:36 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:44.389 01:01:36 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:44.389 01:01:36 -- common/autotest_common.sh@1333 -- # break 00:27:44.390 01:01:36 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:44.390 01:01:36 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:44.647 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:44.647 ... 00:27:44.647 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:27:44.647 ... 
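The two job sections above (filename0, filename1) each run numjobs=2, which is why fio reports four threads in the banner that follows; the comma-separated bs=8k,16k,128k set earlier maps to per-direction read/write/trim block sizes, exactly as the (R)/(W)/(T) fields echo back. Note also the LD_PRELOAD of libasan.so.8 just above: the harness runs ldd on the spdk_bdev fio plugin, greps out the ASan runtime, and preloads it so the sanitizer initializes before the plugin loads. A hand-written job file roughly equivalent to the generated one is sketched below; the bdev names are assumptions based on the Nvme0/Nvme1 attach parameters, and the real file is produced by gen_fio_conf and passed over /dev/fd/61:

# Approximate equivalent of the generated job file (assumed, not verbatim).
cat > dif_rand.fio <<'FIO'
[global]
ioengine=spdk_bdev
thread=1
rw=randread
; read,write,trim sizes -> the (R)/(W)/(T) banner values above
bs=8k,16k,128k
iodepth=8
runtime=5
time_based=1
numjobs=2

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1
FIO
# Then, mirroring the traced invocation:
#   fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json dif_rand.fio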
00:27:44.647 fio-3.35 00:27:44.647 Starting 4 threads 00:27:44.647 EAL: No free 2048 kB hugepages reported on node 1 00:27:51.206 00:27:51.206 filename0: (groupid=0, jobs=1): err= 0: pid=2933511: Sat Apr 27 01:01:42 2024 00:27:51.206 read: IOPS=2395, BW=18.7MiB/s (19.6MB/s)(93.6MiB/5001msec) 00:27:51.206 slat (usec): min=3, max=132, avg= 9.88, stdev= 8.36 00:27:51.206 clat (usec): min=723, max=6228, avg=3309.91, stdev=639.33 00:27:51.206 lat (usec): min=733, max=6236, avg=3319.79, stdev=639.09 00:27:51.206 clat percentiles (usec): 00:27:51.206 | 1.00th=[ 2114], 5.00th=[ 2474], 10.00th=[ 2671], 20.00th=[ 2900], 00:27:51.206 | 30.00th=[ 3032], 40.00th=[ 3130], 50.00th=[ 3195], 60.00th=[ 3261], 00:27:51.206 | 70.00th=[ 3392], 80.00th=[ 3589], 90.00th=[ 4146], 95.00th=[ 4817], 00:27:51.206 | 99.00th=[ 5342], 99.50th=[ 5538], 99.90th=[ 5997], 99.95th=[ 6128], 00:27:51.206 | 99.99th=[ 6194] 00:27:51.206 bw ( KiB/s): min=17568, max=20208, per=24.13%, avg=19052.44, stdev=1113.83, samples=9 00:27:51.206 iops : min= 2196, max= 2526, avg=2381.56, stdev=139.23, samples=9 00:27:51.206 lat (usec) : 750=0.03%, 1000=0.03% 00:27:51.206 lat (msec) : 2=0.70%, 4=87.63%, 10=11.61% 00:27:51.206 cpu : usr=97.78%, sys=1.92%, ctx=6, majf=0, minf=1634 00:27:51.206 IO depths : 1=0.1%, 2=6.1%, 4=65.1%, 8=28.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.206 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.206 complete : 0=0.0%, 4=93.4%, 8=6.6%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.206 issued rwts: total=11980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.206 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:51.206 filename0: (groupid=0, jobs=1): err= 0: pid=2933512: Sat Apr 27 01:01:42 2024 00:27:51.206 read: IOPS=2580, BW=20.2MiB/s (21.1MB/s)(102MiB/5044msec) 00:27:51.206 slat (usec): min=3, max=141, avg=11.34, stdev= 9.13 00:27:51.206 clat (usec): min=879, max=46215, avg=3050.76, stdev=820.08 00:27:51.206 lat (usec): min=887, max=46222, avg=3062.11, stdev=821.01 00:27:51.206 clat percentiles (usec): 00:27:51.206 | 1.00th=[ 1926], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2638], 00:27:51.207 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 3032], 60.00th=[ 3163], 00:27:51.207 | 70.00th=[ 3228], 80.00th=[ 3326], 90.00th=[ 3589], 95.00th=[ 3851], 00:27:51.207 | 99.00th=[ 4883], 99.50th=[ 4883], 99.90th=[ 5473], 99.95th=[ 7373], 00:27:51.207 | 99.99th=[43254] 00:27:51.207 bw ( KiB/s): min=18944, max=22176, per=26.37%, avg=20817.60, stdev=1075.07, samples=10 00:27:51.207 iops : min= 2368, max= 2772, avg=2602.20, stdev=134.38, samples=10 00:27:51.207 lat (usec) : 1000=0.05% 00:27:51.207 lat (msec) : 2=1.41%, 4=94.32%, 10=4.20%, 50=0.02% 00:27:51.207 cpu : usr=97.20%, sys=2.48%, ctx=6, majf=0, minf=1636 00:27:51.207 IO depths : 1=0.1%, 2=8.2%, 4=62.1%, 8=29.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.207 complete : 0=0.0%, 4=93.8%, 8=6.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.207 issued rwts: total=13014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.207 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:51.207 filename1: (groupid=0, jobs=1): err= 0: pid=2933513: Sat Apr 27 01:01:42 2024 00:27:51.207 read: IOPS=2620, BW=20.5MiB/s (21.5MB/s)(102MiB/5002msec) 00:27:51.207 slat (usec): min=3, max=127, avg= 9.52, stdev= 7.72 00:27:51.207 clat (usec): min=642, max=6480, avg=3023.43, stdev=559.38 00:27:51.207 lat (usec): min=651, max=6501, avg=3032.95, stdev=559.99 00:27:51.207 clat 
percentiles (usec): 00:27:51.207 | 1.00th=[ 1909], 5.00th=[ 2114], 10.00th=[ 2409], 20.00th=[ 2638], 00:27:51.207 | 30.00th=[ 2769], 40.00th=[ 2900], 50.00th=[ 2999], 60.00th=[ 3163], 00:27:51.207 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3589], 95.00th=[ 3949], 00:27:51.207 | 99.00th=[ 4883], 99.50th=[ 5145], 99.90th=[ 5604], 99.95th=[ 5866], 00:27:51.207 | 99.99th=[ 6194] 00:27:51.207 bw ( KiB/s): min=19286, max=24688, per=26.55%, avg=20960.60, stdev=1562.45, samples=10 00:27:51.207 iops : min= 2410, max= 3086, avg=2620.00, stdev=195.40, samples=10 00:27:51.207 lat (usec) : 750=0.01%, 1000=0.02% 00:27:51.207 lat (msec) : 2=4.27%, 4=90.97%, 10=4.73% 00:27:51.207 cpu : usr=97.92%, sys=1.78%, ctx=7, majf=0, minf=1637 00:27:51.207 IO depths : 1=0.1%, 2=11.0%, 4=60.7%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.207 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.207 issued rwts: total=13106,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.207 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:51.207 filename1: (groupid=0, jobs=1): err= 0: pid=2933514: Sat Apr 27 01:01:42 2024 00:27:51.207 read: IOPS=2335, BW=18.2MiB/s (19.1MB/s)(91.2MiB/5001msec) 00:27:51.207 slat (usec): min=4, max=132, avg= 9.68, stdev= 8.19 00:27:51.207 clat (usec): min=720, max=6390, avg=3396.98, stdev=651.02 00:27:51.207 lat (usec): min=728, max=6398, avg=3406.66, stdev=650.66 00:27:51.207 clat percentiles (usec): 00:27:51.207 | 1.00th=[ 2212], 5.00th=[ 2638], 10.00th=[ 2835], 20.00th=[ 2966], 00:27:51.207 | 30.00th=[ 3130], 40.00th=[ 3163], 50.00th=[ 3228], 60.00th=[ 3326], 00:27:51.207 | 70.00th=[ 3458], 80.00th=[ 3654], 90.00th=[ 4424], 95.00th=[ 4883], 00:27:51.207 | 99.00th=[ 5407], 99.50th=[ 5538], 99.90th=[ 5932], 99.95th=[ 6128], 00:27:51.207 | 99.99th=[ 6390] 00:27:51.207 bw ( KiB/s): min=17728, max=20336, per=23.70%, avg=18712.89, stdev=863.60, samples=9 00:27:51.207 iops : min= 2216, max= 2542, avg=2339.11, stdev=107.95, samples=9 00:27:51.207 lat (usec) : 750=0.01%, 1000=0.03% 00:27:51.207 lat (msec) : 2=0.48%, 4=85.79%, 10=13.70% 00:27:51.207 cpu : usr=98.02%, sys=1.68%, ctx=7, majf=0, minf=1636 00:27:51.207 IO depths : 1=0.1%, 2=5.0%, 4=67.2%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:51.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.207 complete : 0=0.0%, 4=92.8%, 8=7.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:51.207 issued rwts: total=11680,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:51.207 latency : target=0, window=0, percentile=100.00%, depth=8 00:27:51.207 00:27:51.207 Run status group 0 (all jobs): 00:27:51.207 READ: bw=77.1MiB/s (80.8MB/s), 18.2MiB/s-20.5MiB/s (19.1MB/s-21.5MB/s), io=389MiB (408MB), run=5001-5044msec 00:27:51.207 ----------------------------------------------------- 00:27:51.207 Suppressions used: 00:27:51.207 count bytes template 00:27:51.207 6 52 /usr/src/fio/parse.c 00:27:51.207 1 8 libtcmalloc_minimal.so 00:27:51.207 1 904 libcrypto.so 00:27:51.207 ----------------------------------------------------- 00:27:51.207 00:27:51.207 01:01:43 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:27:51.207 01:01:43 -- target/dif.sh@43 -- # local sub 00:27:51.207 01:01:43 -- target/dif.sh@45 -- # for sub in "$@" 00:27:51.207 01:01:43 -- target/dif.sh@46 -- # destroy_subsystem 0 00:27:51.207 01:01:43 -- target/dif.sh@36 -- # local sub_id=0 00:27:51.207 01:01:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:27:51.207 01:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 01:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.207 01:01:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:27:51.207 01:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 01:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.207 01:01:43 -- target/dif.sh@45 -- # for sub in "$@" 00:27:51.207 01:01:43 -- target/dif.sh@46 -- # destroy_subsystem 1 00:27:51.207 01:01:43 -- target/dif.sh@36 -- # local sub_id=1 00:27:51.207 01:01:43 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:51.207 01:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 01:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.207 01:01:43 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:27:51.207 01:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 01:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.207 00:27:51.207 real 0m26.575s 00:27:51.207 user 5m31.121s 00:27:51.207 sys 0m4.045s 00:27:51.207 01:01:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 ************************************ 00:27:51.207 END TEST fio_dif_rand_params 00:27:51.207 ************************************ 00:27:51.207 01:01:43 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:27:51.207 01:01:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:27:51.207 01:01:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 ************************************ 00:27:51.207 START TEST fio_dif_digest 00:27:51.207 ************************************ 00:27:51.207 01:01:43 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:27:51.207 01:01:43 -- target/dif.sh@123 -- # local NULL_DIF 00:27:51.207 01:01:43 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:27:51.207 01:01:43 -- target/dif.sh@125 -- # local hdgst ddgst 00:27:51.207 01:01:43 -- target/dif.sh@127 -- # NULL_DIF=3 00:27:51.207 01:01:43 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:27:51.207 01:01:43 -- target/dif.sh@127 -- # numjobs=3 00:27:51.207 01:01:43 -- target/dif.sh@127 -- # iodepth=3 00:27:51.207 01:01:43 -- target/dif.sh@127 -- # runtime=10 00:27:51.207 01:01:43 -- target/dif.sh@128 -- # hdgst=true 00:27:51.207 01:01:43 -- target/dif.sh@128 -- # ddgst=true 00:27:51.207 01:01:43 -- target/dif.sh@130 -- # create_subsystems 0 00:27:51.207 01:01:43 -- target/dif.sh@28 -- # local sub 00:27:51.207 01:01:43 -- target/dif.sh@30 -- # for sub in "$@" 00:27:51.207 01:01:43 -- target/dif.sh@31 -- # create_subsystem 0 00:27:51.207 01:01:43 -- target/dif.sh@18 -- # local sub_id=0 00:27:51.207 01:01:43 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:27:51.207 01:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 bdev_null0 00:27:51.207 01:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
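For fio_dif_digest the backing null bdev is created with DIF type 3 protection (a 64 MiB bdev with 512-byte blocks and 16 bytes of per-block metadata), and the controller attach later in the trace enables NVMe/TCP header and data digests (hdgst/ddgst true in the JSON further down). The rpc_cmd wrapper in the trace drives scripts/rpc.py against the running target; issued by hand, the setup sequence that begins with the bdev_null_create above amounts to the following sketch (RPC socket and working directory assumed):

# Same setup as the traced rpc_cmd calls, run manually from the spdk tree.
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
        --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420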
00:27:51.207 01:01:43 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:27:51.207 01:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 01:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.207 01:01:43 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:27:51.207 01:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 01:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.207 01:01:43 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:51.207 01:01:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:51.207 01:01:43 -- common/autotest_common.sh@10 -- # set +x 00:27:51.207 [2024-04-27 01:01:43.730838] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:51.207 01:01:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:51.207 01:01:43 -- target/dif.sh@131 -- # fio /dev/fd/62 00:27:51.207 01:01:43 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.207 01:01:43 -- common/autotest_common.sh@1342 -- # fio_plugin /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.207 01:01:43 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:27:51.207 01:01:43 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:51.207 01:01:43 -- common/autotest_common.sh@1325 -- # local sanitizers 00:27:51.207 01:01:43 -- common/autotest_common.sh@1326 -- # local plugin=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:51.207 01:01:43 -- common/autotest_common.sh@1327 -- # shift 00:27:51.207 01:01:43 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:27:51.207 01:01:43 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:27:51.207 01:01:43 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:27:51.207 01:01:43 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:27:51.207 01:01:43 -- nvmf/common.sh@521 -- # config=() 00:27:51.207 01:01:43 -- nvmf/common.sh@521 -- # local subsystem config 00:27:51.207 01:01:43 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:27:51.207 01:01:43 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:27:51.207 { 00:27:51.207 "params": { 00:27:51.207 "name": "Nvme$subsystem", 00:27:51.207 "trtype": "$TEST_TRANSPORT", 00:27:51.207 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:51.207 "adrfam": "ipv4", 00:27:51.207 "trsvcid": "$NVMF_PORT", 00:27:51.207 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:51.208 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:51.208 "hdgst": ${hdgst:-false}, 00:27:51.208 "ddgst": ${ddgst:-false} 00:27:51.208 }, 00:27:51.208 "method": "bdev_nvme_attach_controller" 00:27:51.208 } 00:27:51.208 EOF 00:27:51.208 )") 00:27:51.208 01:01:43 -- target/dif.sh@82 -- # gen_fio_conf 00:27:51.208 01:01:43 -- target/dif.sh@54 -- # local file 00:27:51.208 01:01:43 -- target/dif.sh@56 -- # cat 00:27:51.208 01:01:43 -- nvmf/common.sh@543 -- # cat 00:27:51.208 01:01:43 -- common/autotest_common.sh@1331 -- # ldd /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev 00:27:51.208 01:01:43 
-- common/autotest_common.sh@1331 -- # grep libasan 00:27:51.208 01:01:43 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:27:51.208 01:01:43 -- target/dif.sh@72 -- # (( file = 1 )) 00:27:51.208 01:01:43 -- target/dif.sh@72 -- # (( file <= files )) 00:27:51.208 01:01:43 -- nvmf/common.sh@545 -- # jq . 00:27:51.208 01:01:43 -- nvmf/common.sh@546 -- # IFS=, 00:27:51.208 01:01:43 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:27:51.208 "params": { 00:27:51.208 "name": "Nvme0", 00:27:51.208 "trtype": "tcp", 00:27:51.208 "traddr": "10.0.0.2", 00:27:51.208 "adrfam": "ipv4", 00:27:51.208 "trsvcid": "4420", 00:27:51.208 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:51.208 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:51.208 "hdgst": true, 00:27:51.208 "ddgst": true 00:27:51.208 }, 00:27:51.208 "method": "bdev_nvme_attach_controller" 00:27:51.208 }' 00:27:51.208 01:01:43 -- common/autotest_common.sh@1331 -- # asan_lib=/usr/lib64/libasan.so.8 00:27:51.208 01:01:43 -- common/autotest_common.sh@1332 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:27:51.208 01:01:43 -- common/autotest_common.sh@1333 -- # break 00:27:51.208 01:01:43 -- common/autotest_common.sh@1338 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /var/jenkins/workspace/dsa-phy-autotest/spdk/build/fio/spdk_bdev' 00:27:51.208 01:01:43 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:27:51.774 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:27:51.774 ... 00:27:51.774 fio-3.35 00:27:51.774 Starting 3 threads 00:27:51.774 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.968 00:28:03.968 filename0: (groupid=0, jobs=1): err= 0: pid=2935161: Sat Apr 27 01:01:54 2024 00:28:03.968 read: IOPS=291, BW=36.4MiB/s (38.2MB/s)(366MiB/10046msec) 00:28:03.968 slat (nsec): min=5358, max=43083, avg=9442.93, stdev=2921.83 00:28:03.968 clat (usec): min=7616, max=51234, avg=10271.08, stdev=1458.30 00:28:03.968 lat (usec): min=7623, max=51242, avg=10280.52, stdev=1458.28 00:28:03.968 clat percentiles (usec): 00:28:03.968 | 1.00th=[ 8455], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9372], 00:28:03.968 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:28:03.968 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11731], 95.00th=[12518], 00:28:03.968 | 99.00th=[13173], 99.50th=[13566], 99.90th=[16581], 99.95th=[46924], 00:28:03.968 | 99.99th=[51119] 00:28:03.968 bw ( KiB/s): min=34304, max=39680, per=34.97%, avg=37440.00, stdev=1697.86, samples=20 00:28:03.968 iops : min= 268, max= 310, avg=292.50, stdev=13.26, samples=20 00:28:03.968 lat (msec) : 10=46.57%, 20=53.37%, 50=0.03%, 100=0.03% 00:28:03.968 cpu : usr=96.71%, sys=2.99%, ctx=13, majf=0, minf=1634 00:28:03.968 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.968 issued rwts: total=2927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.968 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.968 filename0: (groupid=0, jobs=1): err= 0: pid=2935162: Sat Apr 27 01:01:54 2024 00:28:03.968 read: IOPS=279, BW=34.9MiB/s (36.6MB/s)(351MiB/10045msec) 00:28:03.968 slat (usec): min=3, max=102, avg=12.39, stdev= 4.98 00:28:03.968 clat (usec): min=7871, max=51023, avg=10713.33, stdev=1592.64 00:28:03.968 lat (usec): min=7887, max=51034, 
avg=10725.72, stdev=1592.55 00:28:03.968 clat percentiles (usec): 00:28:03.968 | 1.00th=[ 8586], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[ 9765], 00:28:03.968 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:28:03.968 | 70.00th=[10945], 80.00th=[11469], 90.00th=[12518], 95.00th=[13304], 00:28:03.968 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15533], 99.95th=[46924], 00:28:03.968 | 99.99th=[51119] 00:28:03.968 bw ( KiB/s): min=32256, max=37888, per=33.52%, avg=35882.00, stdev=1651.88, samples=20 00:28:03.968 iops : min= 252, max= 296, avg=280.30, stdev=12.90, samples=20 00:28:03.968 lat (msec) : 10=30.52%, 20=69.41%, 50=0.04%, 100=0.04% 00:28:03.968 cpu : usr=96.10%, sys=3.03%, ctx=242, majf=0, minf=1633 00:28:03.968 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.968 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.968 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.968 issued rwts: total=2805,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.968 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.968 filename0: (groupid=0, jobs=1): err= 0: pid=2935163: Sat Apr 27 01:01:54 2024 00:28:03.968 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(334MiB/10043msec) 00:28:03.968 slat (nsec): min=4663, max=42159, avg=9692.10, stdev=3206.08 00:28:03.968 clat (usec): min=8408, max=53984, avg=11257.07, stdev=1570.37 00:28:03.968 lat (usec): min=8418, max=53992, avg=11266.77, stdev=1570.41 00:28:03.968 clat percentiles (usec): 00:28:03.968 | 1.00th=[ 9372], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:28:03.968 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:28:03.968 | 70.00th=[11469], 80.00th=[11994], 90.00th=[12911], 95.00th=[13698], 00:28:03.969 | 99.00th=[14877], 99.50th=[15139], 99.90th=[16057], 99.95th=[46924], 00:28:03.969 | 99.99th=[53740] 00:28:03.969 bw ( KiB/s): min=30976, max=35840, per=31.90%, avg=34150.40, stdev=1552.53, samples=20 00:28:03.969 iops : min= 242, max= 280, avg=266.80, stdev=12.13, samples=20 00:28:03.969 lat (msec) : 10=9.59%, 20=90.34%, 50=0.04%, 100=0.04% 00:28:03.969 cpu : usr=96.36%, sys=3.35%, ctx=13, majf=0, minf=1635 00:28:03.969 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:03.969 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.969 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:03.969 issued rwts: total=2670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:03.969 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:03.969 00:28:03.969 Run status group 0 (all jobs): 00:28:03.969 READ: bw=105MiB/s (110MB/s), 33.2MiB/s-36.4MiB/s (34.8MB/s-38.2MB/s), io=1050MiB (1101MB), run=10043-10046msec 00:28:03.969 ----------------------------------------------------- 00:28:03.969 Suppressions used: 00:28:03.969 count bytes template 00:28:03.969 5 44 /usr/src/fio/parse.c 00:28:03.969 1 8 libtcmalloc_minimal.so 00:28:03.969 1 904 libcrypto.so 00:28:03.969 ----------------------------------------------------- 00:28:03.969 00:28:03.969 01:01:55 -- target/dif.sh@132 -- # destroy_subsystems 0 00:28:03.969 01:01:55 -- target/dif.sh@43 -- # local sub 00:28:03.969 01:01:55 -- target/dif.sh@45 -- # for sub in "$@" 00:28:03.969 01:01:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:03.969 01:01:55 -- target/dif.sh@36 -- # local sub_id=0 00:28:03.969 01:01:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:28:03.969 01:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.969 01:01:55 -- common/autotest_common.sh@10 -- # set +x 00:28:03.969 01:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.969 01:01:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:03.969 01:01:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.969 01:01:55 -- common/autotest_common.sh@10 -- # set +x 00:28:03.969 01:01:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.969 00:28:03.969 real 0m11.920s 00:28:03.969 user 0m47.782s 00:28:03.969 sys 0m1.393s 00:28:03.969 01:01:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:03.969 01:01:55 -- common/autotest_common.sh@10 -- # set +x 00:28:03.969 ************************************ 00:28:03.969 END TEST fio_dif_digest 00:28:03.969 ************************************ 00:28:03.969 01:01:55 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:28:03.969 01:01:55 -- target/dif.sh@147 -- # nvmftestfini 00:28:03.969 01:01:55 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:03.969 01:01:55 -- nvmf/common.sh@117 -- # sync 00:28:03.969 01:01:55 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:03.969 01:01:55 -- nvmf/common.sh@120 -- # set +e 00:28:03.969 01:01:55 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:03.969 01:01:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:03.969 rmmod nvme_tcp 00:28:03.969 rmmod nvme_fabrics 00:28:03.969 rmmod nvme_keyring 00:28:03.969 01:01:55 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:03.969 01:01:55 -- nvmf/common.sh@124 -- # set -e 00:28:03.969 01:01:55 -- nvmf/common.sh@125 -- # return 0 00:28:03.969 01:01:55 -- nvmf/common.sh@478 -- # '[' -n 2923525 ']' 00:28:03.969 01:01:55 -- nvmf/common.sh@479 -- # killprocess 2923525 00:28:03.969 01:01:55 -- common/autotest_common.sh@936 -- # '[' -z 2923525 ']' 00:28:03.969 01:01:55 -- common/autotest_common.sh@940 -- # kill -0 2923525 00:28:03.969 01:01:55 -- common/autotest_common.sh@941 -- # uname 00:28:03.969 01:01:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:03.969 01:01:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2923525 00:28:03.969 01:01:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:03.969 01:01:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:03.969 01:01:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2923525' 00:28:03.969 killing process with pid 2923525 00:28:03.969 01:01:55 -- common/autotest_common.sh@955 -- # kill 2923525 00:28:03.969 01:01:55 -- common/autotest_common.sh@960 -- # wait 2923525 00:28:03.969 01:01:56 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:03.969 01:01:56 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:28:06.505 Waiting for block devices as requested 00:28:06.505 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:28:06.505 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:06.505 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:06.505 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:28:06.763 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:06.763 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:28:06.763 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:06.763 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:28:07.021 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:07.021 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:28:07.021 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 
00:28:07.021 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:28:07.281 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:28:07.281 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:28:07.281 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:28:07.539 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:07.539 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:28:07.539 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:07.539 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:28:07.799 01:02:00 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:07.800 01:02:00 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:07.800 01:02:00 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:07.800 01:02:00 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:07.800 01:02:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:07.800 01:02:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:07.800 01:02:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.334 01:02:02 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:10.334 00:28:10.334 real 1m19.054s 00:28:10.334 user 8m23.287s 00:28:10.334 sys 0m17.448s 00:28:10.334 01:02:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:10.334 01:02:02 -- common/autotest_common.sh@10 -- # set +x 00:28:10.334 ************************************ 00:28:10.334 END TEST nvmf_dif 00:28:10.334 ************************************ 00:28:10.334 01:02:02 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:10.334 01:02:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:10.334 01:02:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:10.334 01:02:02 -- common/autotest_common.sh@10 -- # set +x 00:28:10.334 ************************************ 00:28:10.334 START TEST nvmf_abort_qd_sizes 00:28:10.334 ************************************ 00:28:10.334 01:02:02 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:28:10.334 * Looking for test storage... 
00:28:10.334 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/target 00:28:10.334 01:02:02 -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:28:10.334 01:02:02 -- nvmf/common.sh@7 -- # uname -s 00:28:10.334 01:02:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:10.334 01:02:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:10.334 01:02:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:10.334 01:02:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:10.334 01:02:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:10.334 01:02:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:10.334 01:02:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:10.334 01:02:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:10.334 01:02:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:10.334 01:02:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:10.334 01:02:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:28:10.334 01:02:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:28:10.334 01:02:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:10.334 01:02:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:10.334 01:02:02 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:28:10.334 01:02:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:10.334 01:02:02 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:28:10.334 01:02:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:10.334 01:02:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:10.334 01:02:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:10.334 01:02:02 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.334 01:02:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.334 01:02:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.334 01:02:02 -- paths/export.sh@5 -- # export PATH 00:28:10.334 01:02:02 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:10.334 01:02:02 -- nvmf/common.sh@47 -- # : 0 00:28:10.334 01:02:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:10.334 01:02:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:10.334 01:02:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:10.334 01:02:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:10.334 01:02:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:10.334 01:02:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:10.334 01:02:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:10.334 01:02:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:10.334 01:02:02 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:28:10.334 01:02:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:10.334 01:02:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:10.334 01:02:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:10.334 01:02:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:10.334 01:02:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:10.334 01:02:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.334 01:02:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:10.334 01:02:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:10.334 01:02:02 -- nvmf/common.sh@403 -- # [[ phy-fallback != virt ]] 00:28:10.334 01:02:02 -- nvmf/common.sh@403 -- # gather_supported_nvmf_pci_devs 00:28:10.334 01:02:02 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:10.334 01:02:02 -- common/autotest_common.sh@10 -- # set +x 00:28:15.688 01:02:07 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:15.688 01:02:07 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:15.688 01:02:07 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:15.688 01:02:07 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:15.688 01:02:07 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:15.688 01:02:07 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:15.688 01:02:07 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:15.688 01:02:07 -- nvmf/common.sh@295 -- # net_devs=() 00:28:15.688 01:02:07 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:15.688 01:02:07 -- nvmf/common.sh@296 -- # e810=() 00:28:15.688 01:02:07 -- nvmf/common.sh@296 -- # local -ga e810 00:28:15.688 01:02:07 -- nvmf/common.sh@297 -- # x722=() 00:28:15.688 01:02:07 -- nvmf/common.sh@297 -- # local -ga x722 00:28:15.688 01:02:07 -- nvmf/common.sh@298 -- # mlx=() 00:28:15.688 01:02:07 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:15.688 01:02:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.688 01:02:07 -- 
nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.688 01:02:07 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:15.688 01:02:07 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@327 -- # [[ '' == mlx5 ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@329 -- # [[ '' == e810 ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@331 -- # [[ '' == x722 ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:15.688 01:02:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.688 01:02:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.0 (0x8086 - 0x159b)' 00:28:15.688 Found 0000:27:00.0 (0x8086 - 0x159b) 00:28:15.688 01:02:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:15.688 01:02:07 -- nvmf/common.sh@341 -- # echo 'Found 0000:27:00.1 (0x8086 - 0x159b)' 00:28:15.688 Found 0000:27:00.1 (0x8086 - 0x159b) 00:28:15.688 01:02:07 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:15.688 01:02:07 -- nvmf/common.sh@372 -- # [[ '' == e810 ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.688 01:02:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.688 01:02:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:15.688 01:02:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.688 01:02:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.0: cvl_0_0' 00:28:15.688 Found net devices under 0000:27:00.0: cvl_0_0 00:28:15.688 01:02:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.688 01:02:07 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:15.688 01:02:07 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.688 01:02:07 -- nvmf/common.sh@384 -- # (( 1 == 0 )) 00:28:15.688 01:02:07 -- nvmf/common.sh@388 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.688 01:02:07 -- nvmf/common.sh@389 -- # echo 'Found net devices under 0000:27:00.1: cvl_0_1' 00:28:15.688 Found net devices under 0000:27:00.1: cvl_0_1 00:28:15.688 01:02:07 -- nvmf/common.sh@390 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.688 01:02:07 -- nvmf/common.sh@393 -- # (( 2 == 0 )) 00:28:15.688 01:02:07 -- nvmf/common.sh@403 -- # is_hw=yes 00:28:15.688 01:02:07 -- nvmf/common.sh@405 -- # [[ yes == yes ]] 00:28:15.688 01:02:07 -- nvmf/common.sh@406 -- # [[ tcp == tcp ]] 00:28:15.688 01:02:07 -- 
nvmf/common.sh@407 -- # nvmf_tcp_init 00:28:15.688 01:02:07 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:15.688 01:02:07 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.688 01:02:07 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:15.688 01:02:07 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:15.688 01:02:07 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:15.688 01:02:07 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:15.688 01:02:07 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:15.688 01:02:07 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:15.688 01:02:07 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.688 01:02:07 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:15.688 01:02:07 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:15.688 01:02:07 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:15.688 01:02:07 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:15.688 01:02:07 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:15.688 01:02:07 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:15.688 01:02:07 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:15.688 01:02:07 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:15.688 01:02:07 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:15.688 01:02:08 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:15.688 01:02:08 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:15.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:15.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.655 ms 00:28:15.688 00:28:15.688 --- 10.0.0.2 ping statistics --- 00:28:15.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.688 rtt min/avg/max/mdev = 0.655/0.655/0.655/0.000 ms 00:28:15.688 01:02:08 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:15.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:15.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:28:15.688 00:28:15.688 --- 10.0.0.1 ping statistics --- 00:28:15.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.688 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:28:15.688 01:02:08 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.688 01:02:08 -- nvmf/common.sh@411 -- # return 0 00:28:15.688 01:02:08 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:28:15.688 01:02:08 -- nvmf/common.sh@440 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:28:18.222 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:18.222 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:18.222 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:18.222 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:28:18.222 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:18.222 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:28:18.222 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:18.222 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:28:18.222 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:18.222 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:28:18.222 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:28:18.222 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:28:18.222 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:18.222 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:28:18.487 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:18.487 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:28:19.872 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:28:20.131 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:28:20.389 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:28:20.389 01:02:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:20.389 01:02:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:20.389 01:02:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:20.389 01:02:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:20.389 01:02:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:20.389 01:02:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:20.389 01:02:13 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:28:20.389 01:02:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:20.389 01:02:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:20.389 01:02:13 -- common/autotest_common.sh@10 -- # set +x 00:28:20.389 01:02:13 -- nvmf/common.sh@470 -- # nvmfpid=2944843 00:28:20.389 01:02:13 -- nvmf/common.sh@471 -- # waitforlisten 2944843 00:28:20.389 01:02:13 -- common/autotest_common.sh@817 -- # '[' -z 2944843 ']' 00:28:20.389 01:02:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.389 01:02:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:20.389 01:02:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.389 01:02:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:20.389 01:02:13 -- common/autotest_common.sh@10 -- # set +x 00:28:20.389 01:02:13 -- nvmf/common.sh@469 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:28:20.648 [2024-04-27 01:02:13.119415] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
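The nvmf_tcp_init sequence above splits the two-port NIC across a network namespace so target and initiator can exchange real TCP traffic on a single host. A minimal sketch of that plumbing, assuming netdevs enumerated as cvl_0_0/cvl_0_1 as in this log (names vary per NIC):

#!/usr/bin/env bash
# Sketch only: mirrors the ip/iptables calls nvmf_tcp_init ran above.
NS=cvl_0_0_ns_spdk                         # namespace that will host the target

ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"            # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1        # initiator side stays in the root ns
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                         # root ns -> target, as logged above
ip netns exec "$NS" ping -c 1 10.0.0.1     # target ns -> initiator

Every target-side command that follows (nvmf_tgt, and later the kernel-side nvme discover) runs under ip netns exec cvl_0_0_ns_spdk, which is why NVMF_TARGET_NS_CMD is prepended to NVMF_APP in the log.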
00:28:20.648 [2024-04-27 01:02:13.119511] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.648 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.648 [2024-04-27 01:02:13.239367] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:20.648 [2024-04-27 01:02:13.338451] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.648 [2024-04-27 01:02:13.338488] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.648 [2024-04-27 01:02:13.338499] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.648 [2024-04-27 01:02:13.338509] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.648 [2024-04-27 01:02:13.338516] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:20.648 [2024-04-27 01:02:13.338590] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:20.648 [2024-04-27 01:02:13.338687] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:20.648 [2024-04-27 01:02:13.338786] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.648 [2024-04-27 01:02:13.338797] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:21.219 01:02:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:21.219 01:02:13 -- common/autotest_common.sh@850 -- # return 0 00:28:21.219 01:02:13 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:21.219 01:02:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:21.219 01:02:13 -- common/autotest_common.sh@10 -- # set +x 00:28:21.219 01:02:13 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.219 01:02:13 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:28:21.219 01:02:13 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:28:21.219 01:02:13 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:28:21.219 01:02:13 -- scripts/common.sh@309 -- # local bdf bdfs 00:28:21.219 01:02:13 -- scripts/common.sh@310 -- # local nvmes 00:28:21.219 01:02:13 -- scripts/common.sh@312 -- # [[ -n 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 ]] 00:28:21.219 01:02:13 -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:28:21.219 01:02:13 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:21.219 01:02:13 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:c9:00.0 ]] 00:28:21.219 01:02:13 -- scripts/common.sh@320 -- # uname -s 00:28:21.219 01:02:13 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:21.219 01:02:13 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:21.219 01:02:13 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:21.219 01:02:13 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:ca:00.0 ]] 00:28:21.219 01:02:13 -- scripts/common.sh@320 -- # uname -s 00:28:21.219 01:02:13 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:21.219 01:02:13 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:21.219 01:02:13 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:28:21.219 01:02:13 -- scripts/common.sh@319 -- # [[ -e 
/sys/bus/pci/drivers/nvme/0000:cb:00.0 ]] 00:28:21.219 01:02:13 -- scripts/common.sh@320 -- # uname -s 00:28:21.219 01:02:13 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:28:21.219 01:02:13 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:28:21.219 01:02:13 -- scripts/common.sh@325 -- # (( 3 )) 00:28:21.219 01:02:13 -- scripts/common.sh@326 -- # printf '%s\n' 0000:c9:00.0 0000:ca:00.0 0000:cb:00.0 00:28:21.219 01:02:13 -- target/abort_qd_sizes.sh@76 -- # (( 3 > 0 )) 00:28:21.219 01:02:13 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:c9:00.0 00:28:21.219 01:02:13 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:28:21.219 01:02:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:21.219 01:02:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:21.219 01:02:13 -- common/autotest_common.sh@10 -- # set +x 00:28:21.479 ************************************ 00:28:21.479 START TEST spdk_target_abort 00:28:21.479 ************************************ 00:28:21.479 01:02:14 -- common/autotest_common.sh@1111 -- # spdk_target 00:28:21.479 01:02:14 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:28:21.480 01:02:14 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:c9:00.0 -b spdk_target 00:28:21.480 01:02:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.480 01:02:14 -- common/autotest_common.sh@10 -- # set +x 00:28:24.766 spdk_targetn1 00:28:24.766 01:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.766 01:02:16 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:24.766 01:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.766 01:02:16 -- common/autotest_common.sh@10 -- # set +x 00:28:24.766 [2024-04-27 01:02:16.852001] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:24.766 01:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.766 01:02:16 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:28:24.766 01:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.766 01:02:16 -- common/autotest_common.sh@10 -- # set +x 00:28:24.766 01:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.766 01:02:16 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:28:24.766 01:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.766 01:02:16 -- common/autotest_common.sh@10 -- # set +x 00:28:24.766 01:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.766 01:02:16 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:28:24.766 01:02:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.766 01:02:16 -- common/autotest_common.sh@10 -- # set +x 00:28:24.766 [2024-04-27 01:02:16.885881] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:24.766 01:02:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.766 01:02:16 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:28:24.766 01:02:16 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:24.766 01:02:16 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:24.766 01:02:16 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:28:24.767 01:02:16 -- 
target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:24.767 01:02:16 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:24.767 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.056 Initializing NVMe Controllers 00:28:28.056 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:28.056 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:28.056 Initialization complete. Launching workers. 00:28:28.056 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 18413, failed: 0 00:28:28.056 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1684, failed to submit 16729 00:28:28.056 success 776, unsuccess 908, failed 0 00:28:28.056 01:02:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:28.056 01:02:20 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:28.056 EAL: No free 2048 kB hugepages reported on node 1 00:28:31.342 Initializing NVMe Controllers 00:28:31.342 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:31.342 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:31.342 Initialization complete. Launching workers. 
00:28:31.342 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8598, failed: 0 00:28:31.342 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1227, failed to submit 7371 00:28:31.342 success 340, unsuccess 887, failed 0 00:28:31.342 01:02:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:31.342 01:02:23 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:31.342 EAL: No free 2048 kB hugepages reported on node 1 00:28:34.632 Initializing NVMe Controllers 00:28:34.632 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:28:34.632 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:34.632 Initialization complete. Launching workers. 00:28:34.632 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39186, failed: 0 00:28:34.632 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2647, failed to submit 36539 00:28:34.632 success 602, unsuccess 2045, failed 0 00:28:34.632 01:02:26 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:28:34.632 01:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:34.632 01:02:26 -- common/autotest_common.sh@10 -- # set +x 00:28:34.632 01:02:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:34.632 01:02:26 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:28:34.632 01:02:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:34.632 01:02:26 -- common/autotest_common.sh@10 -- # set +x 00:28:37.164 01:02:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:37.164 01:02:29 -- target/abort_qd_sizes.sh@61 -- # killprocess 2944843 00:28:37.164 01:02:29 -- common/autotest_common.sh@936 -- # '[' -z 2944843 ']' 00:28:37.164 01:02:29 -- common/autotest_common.sh@940 -- # kill -0 2944843 00:28:37.164 01:02:29 -- common/autotest_common.sh@941 -- # uname 00:28:37.164 01:02:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:37.164 01:02:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2944843 00:28:37.164 01:02:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:37.164 01:02:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:37.164 01:02:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2944843' 00:28:37.164 killing process with pid 2944843 00:28:37.164 01:02:29 -- common/autotest_common.sh@955 -- # kill 2944843 00:28:37.164 01:02:29 -- common/autotest_common.sh@960 -- # wait 2944843 00:28:37.164 00:28:37.164 real 0m15.667s 00:28:37.164 user 1m2.756s 00:28:37.164 sys 0m1.462s 00:28:37.164 01:02:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:37.164 01:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:37.164 ************************************ 00:28:37.164 END TEST spdk_target_abort 00:28:37.164 ************************************ 00:28:37.164 01:02:29 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:28:37.164 01:02:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:37.164 01:02:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:37.164 01:02:29 -- common/autotest_common.sh@10 -- # set +x 00:28:37.164 
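The three runs in the spdk_target_abort test above are one iteration each of rabort()'s queue-depth sweep: the abort example floods the subsystem with mixed 4 KiB I/O while racing abort commands against it, and the NS/CTRLR summary lines count aborts submitted versus I/O completed. Condensed, using the paths from this log:

ABORT=/var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort
TGT='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

for qd in 4 24 64; do
  # -w rw -M 50: 50/50 read/write mix; -o 4096: 4 KiB I/O; -q: queue depth
  "$ABORT" -q "$qd" -w rw -M 50 -o 4096 -r "$TGT"
done

The test only asserts that the tool exits cleanly; "unsuccess" in the summary roughly counts abort commands whose target I/O had already completed before the abort landed, not failures.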
************************************ 00:28:37.164 START TEST kernel_target_abort 00:28:37.164 ************************************ 00:28:37.164 01:02:29 -- common/autotest_common.sh@1111 -- # kernel_target 00:28:37.164 01:02:29 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:28:37.164 01:02:29 -- nvmf/common.sh@717 -- # local ip 00:28:37.164 01:02:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:37.164 01:02:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:37.164 01:02:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.164 01:02:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.164 01:02:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:37.164 01:02:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.164 01:02:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:37.164 01:02:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:37.164 01:02:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:37.164 01:02:29 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:37.164 01:02:29 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:28:37.164 01:02:29 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:28:37.164 01:02:29 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:37.164 01:02:29 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:37.164 01:02:29 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:37.164 01:02:29 -- nvmf/common.sh@628 -- # local block nvme 00:28:37.165 01:02:29 -- nvmf/common.sh@630 -- # [[ ! 
-e /sys/module/nvmet ]] 00:28:37.165 01:02:29 -- nvmf/common.sh@631 -- # modprobe nvmet 00:28:37.165 01:02:29 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:37.165 01:02:29 -- nvmf/common.sh@636 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:28:39.696 Waiting for block devices as requested 00:28:39.696 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:28:39.956 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:39.956 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:39.956 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:28:39.956 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:40.215 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:28:40.215 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:40.215 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:28:40.215 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:40.474 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:28:40.474 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:40.474 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:28:40.760 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:28:40.760 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:28:40.760 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:28:40.760 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:41.019 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:28:41.019 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:41.019 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:28:41.951 01:02:34 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:41.951 01:02:34 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:41.951 01:02:34 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:28:41.951 01:02:34 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:28:41.951 01:02:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:41.951 01:02:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:41.951 01:02:34 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:28:41.951 01:02:34 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:41.951 01:02:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:41.951 No valid GPT data, bailing 00:28:41.951 01:02:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:41.951 01:02:34 -- scripts/common.sh@391 -- # pt= 00:28:41.951 01:02:34 -- scripts/common.sh@392 -- # return 1 00:28:41.951 01:02:34 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:28:41.951 01:02:34 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:41.951 01:02:34 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:28:41.951 01:02:34 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:28:41.951 01:02:34 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:28:41.951 01:02:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:28:41.951 01:02:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:41.951 01:02:34 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:28:41.951 01:02:34 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:28:41.951 01:02:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme1n1 00:28:41.951 No valid GPT data, bailing 00:28:41.951 01:02:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:28:41.951 01:02:34 -- scripts/common.sh@391 -- # pt= 00:28:41.951 01:02:34 -- scripts/common.sh@392 -- # return 1 00:28:41.951 01:02:34 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 
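The "No valid GPT data, bailing" lines are the harness probing for NVMe namespaces it may safely claim: anything zoned or carrying a partition table is skipped. Roughly, the probe does the following (it continues below until the last free namespace, /dev/nvme2n1 here, wins):

for blk in /sys/block/nvme*; do
  dev=/dev/${blk##*/}
  # skip zoned namespaces
  if [[ -e $blk/queue/zoned && $(cat "$blk/queue/zoned") != none ]]; then
    continue
  fi
  # spdk-gpt.py prints "No valid GPT data, bailing" and blkid reports no
  # PTTYPE when the disk is unpartitioned, i.e. free for the test
  if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
    nvme=$dev
  fi
done
echo "kernel target will export $nvme"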
00:28:41.951 01:02:34 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:28:41.951 01:02:34 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme2n1 ]] 00:28:41.951 01:02:34 -- nvmf/common.sh@641 -- # is_block_zoned nvme2n1 00:28:41.951 01:02:34 -- common/autotest_common.sh@1648 -- # local device=nvme2n1 00:28:41.951 01:02:34 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:28:41.951 01:02:34 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:28:41.951 01:02:34 -- nvmf/common.sh@642 -- # block_in_use nvme2n1 00:28:41.951 01:02:34 -- scripts/common.sh@378 -- # local block=nvme2n1 pt 00:28:41.951 01:02:34 -- scripts/common.sh@387 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/spdk-gpt.py nvme2n1 00:28:41.951 No valid GPT data, bailing 00:28:41.951 01:02:34 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:28:41.951 01:02:34 -- scripts/common.sh@391 -- # pt= 00:28:41.951 01:02:34 -- scripts/common.sh@392 -- # return 1 00:28:41.951 01:02:34 -- nvmf/common.sh@642 -- # nvme=/dev/nvme2n1 00:28:41.951 01:02:34 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme2n1 ]] 00:28:41.951 01:02:34 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:41.951 01:02:34 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:41.951 01:02:34 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:41.951 01:02:34 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:41.951 01:02:34 -- nvmf/common.sh@656 -- # echo 1 00:28:41.951 01:02:34 -- nvmf/common.sh@657 -- # echo /dev/nvme2n1 00:28:41.951 01:02:34 -- nvmf/common.sh@658 -- # echo 1 00:28:41.951 01:02:34 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:28:41.951 01:02:34 -- nvmf/common.sh@661 -- # echo tcp 00:28:41.951 01:02:34 -- nvmf/common.sh@662 -- # echo 4420 00:28:41.951 01:02:34 -- nvmf/common.sh@663 -- # echo ipv4 00:28:41.951 01:02:34 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:41.951 01:02:34 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea --hostid=00a749c1-515c-ee11-906e-a4bf019734ea -a 10.0.0.1 -t tcp -s 4420 00:28:41.951 00:28:41.951 Discovery Log Number of Records 2, Generation counter 2 00:28:41.951 =====Discovery Log Entry 0====== 00:28:41.951 trtype: tcp 00:28:41.951 adrfam: ipv4 00:28:41.951 subtype: current discovery subsystem 00:28:41.951 treq: not specified, sq flow control disable supported 00:28:41.951 portid: 1 00:28:41.951 trsvcid: 4420 00:28:41.951 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:41.951 traddr: 10.0.0.1 00:28:41.951 eflags: none 00:28:41.951 sectype: none 00:28:41.951 =====Discovery Log Entry 1====== 00:28:41.951 trtype: tcp 00:28:41.951 adrfam: ipv4 00:28:41.951 subtype: nvme subsystem 00:28:41.951 treq: not specified, sq flow control disable supported 00:28:41.951 portid: 1 00:28:41.951 trsvcid: 4420 00:28:41.951 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:41.951 traddr: 10.0.0.1 00:28:41.951 eflags: none 00:28:41.951 sectype: none 00:28:41.951 01:02:34 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:28:41.951 01:02:34 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:28:41.951 01:02:34 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:28:41.951 01:02:34 -- 
target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:28:41.951 01:02:34 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:28:41.951 01:02:34 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@24 -- # local target r 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:41.952 01:02:34 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:41.952 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.240 Initializing NVMe Controllers 00:28:45.240 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:45.240 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:45.240 Initialization complete. Launching workers. 00:28:45.240 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 82843, failed: 0 00:28:45.240 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 82843, failed to submit 0 00:28:45.240 success 0, unsuccess 82843, failed 0 00:28:45.240 01:02:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:45.240 01:02:37 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:45.240 EAL: No free 2048 kB hugepages reported on node 1 00:28:48.521 Initializing NVMe Controllers 00:28:48.521 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:48.521 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:48.521 Initialization complete. Launching workers. 
00:28:48.521 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 131376, failed: 0 00:28:48.521 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32894, failed to submit 98482 00:28:48.521 success 0, unsuccess 32894, failed 0 00:28:48.521 01:02:40 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:28:48.521 01:02:40 -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:48.521 EAL: No free 2048 kB hugepages reported on node 1 00:28:51.806 Initializing NVMe Controllers 00:28:51.806 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:28:51.806 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:28:51.806 Initialization complete. Launching workers. 00:28:51.806 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 124286, failed: 0 00:28:51.806 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 31074, failed to submit 93212 00:28:51.806 success 0, unsuccess 31074, failed 0 00:28:51.806 01:02:43 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:28:51.806 01:02:43 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:51.806 01:02:43 -- nvmf/common.sh@675 -- # echo 0 00:28:51.806 01:02:43 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:51.806 01:02:43 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:51.806 01:02:43 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:51.806 01:02:43 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:51.806 01:02:43 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:51.806 01:02:43 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:51.806 01:02:43 -- nvmf/common.sh@687 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh 00:28:54.342 0000:74:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:54.342 0000:f1:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:54.342 0000:79:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:54.342 0000:6f:01.0 (8086 0b25): idxd -> vfio-pci 00:28:54.342 0000:6f:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:54.342 0000:f6:01.0 (8086 0b25): idxd -> vfio-pci 00:28:54.342 0000:f6:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:54.342 0000:74:01.0 (8086 0b25): idxd -> vfio-pci 00:28:54.342 0000:6a:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:54.342 0000:79:01.0 (8086 0b25): idxd -> vfio-pci 00:28:54.342 0000:ec:01.0 (8086 0b25): idxd -> vfio-pci 00:28:54.342 0000:6a:01.0 (8086 0b25): idxd -> vfio-pci 00:28:54.342 0000:ec:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:54.600 0000:e7:01.0 (8086 0b25): idxd -> vfio-pci 00:28:54.600 0000:e7:02.0 (8086 0cfe): idxd -> vfio-pci 00:28:54.600 0000:f1:01.0 (8086 0b25): idxd -> vfio-pci 00:28:56.506 0000:c9:00.0 (8086 0a54): nvme -> vfio-pci 00:28:56.506 0000:cb:00.0 (8086 0a54): nvme -> vfio-pci 00:28:56.506 0000:ca:00.0 (8086 0a54): nvme -> vfio-pci 00:28:56.506 00:28:56.506 real 0m19.281s 00:28:56.506 user 0m8.874s 00:28:56.506 sys 0m5.284s 00:28:56.506 01:02:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:56.506 01:02:49 -- common/autotest_common.sh@10 -- # set +x 
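configure_kernel_target, whose mkdir/echo/ln -s calls appear earlier in this test, builds the Linux nvmet target purely through configfs, and clean_kernel_target (just run above) tears it down in reverse. Condensed, with the standard nvmet attribute names filled in as an assumption, since the log shows only the values being written:

SUBNQN=nqn.2016-06.io.spdk:testnqn
SUB=/sys/kernel/config/nvmet/subsystems/$SUBNQN
PORT=/sys/kernel/config/nvmet/ports/1

modprobe nvmet                             # nvmet_tcp is pulled in via the port
mkdir -p "$SUB/namespaces/1" "$PORT"
echo "SPDK-$SUBNQN" > "$SUB/attr_serial"   # value from the log; attr name assumed
echo 1              > "$SUB/attr_allow_any_host"
echo /dev/nvme2n1   > "$SUB/namespaces/1/device_path"   # the free disk found above
echo 1              > "$SUB/namespaces/1/enable"
echo 10.0.0.1       > "$PORT/addr_traddr"
echo tcp            > "$PORT/addr_trtype"
echo 4420           > "$PORT/addr_trsvcid"
echo ipv4           > "$PORT/addr_adrfam"
ln -s "$SUB" "$PORT/subsystems/"           # expose the subsystem on the port

# teardown, as run by clean_kernel_target above
rm -f "$PORT/subsystems/$SUBNQN"
rmdir "$SUB/namespaces/1" "$PORT" "$SUB"
modprobe -r nvmet_tcp nvmet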
00:28:56.506 ************************************ 00:28:56.506 END TEST kernel_target_abort 00:28:56.506 ************************************ 00:28:56.506 01:02:49 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:28:56.506 01:02:49 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:28:56.506 01:02:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:56.506 01:02:49 -- nvmf/common.sh@117 -- # sync 00:28:56.506 01:02:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:56.506 01:02:49 -- nvmf/common.sh@120 -- # set +e 00:28:56.506 01:02:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:56.506 01:02:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:56.506 rmmod nvme_tcp 00:28:56.506 rmmod nvme_fabrics 00:28:56.506 rmmod nvme_keyring 00:28:56.506 01:02:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:56.506 01:02:49 -- nvmf/common.sh@124 -- # set -e 00:28:56.506 01:02:49 -- nvmf/common.sh@125 -- # return 0 00:28:56.506 01:02:49 -- nvmf/common.sh@478 -- # '[' -n 2944843 ']' 00:28:56.506 01:02:49 -- nvmf/common.sh@479 -- # killprocess 2944843 00:28:56.506 01:02:49 -- common/autotest_common.sh@936 -- # '[' -z 2944843 ']' 00:28:56.506 01:02:49 -- common/autotest_common.sh@940 -- # kill -0 2944843 00:28:56.506 /var/jenkins/workspace/dsa-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (2944843) - No such process 00:28:56.506 01:02:49 -- common/autotest_common.sh@963 -- # echo 'Process with pid 2944843 is not found' 00:28:56.506 Process with pid 2944843 is not found 00:28:56.506 01:02:49 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:28:56.506 01:02:49 -- nvmf/common.sh@482 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/setup.sh reset 00:28:59.794 Waiting for block devices as requested 00:28:59.794 0000:c9:00.0 (8086 0a54): vfio-pci -> nvme 00:28:59.794 0000:74:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:59.794 0000:f1:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:59.794 0000:cb:00.0 (8086 0a54): vfio-pci -> nvme 00:28:59.794 0000:79:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:59.794 0000:6f:01.0 (8086 0b25): vfio-pci -> idxd 00:28:59.794 0000:6f:02.0 (8086 0cfe): vfio-pci -> idxd 00:28:59.794 0000:f6:01.0 (8086 0b25): vfio-pci -> idxd 00:29:00.053 0000:f6:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:00.053 0000:74:01.0 (8086 0b25): vfio-pci -> idxd 00:29:00.053 0000:6a:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:00.312 0000:79:01.0 (8086 0b25): vfio-pci -> idxd 00:29:00.312 0000:ec:01.0 (8086 0b25): vfio-pci -> idxd 00:29:00.312 0000:6a:01.0 (8086 0b25): vfio-pci -> idxd 00:29:00.312 0000:ca:00.0 (8086 0a54): vfio-pci -> nvme 00:29:00.571 0000:ec:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:00.571 0000:e7:01.0 (8086 0b25): vfio-pci -> idxd 00:29:00.571 0000:e7:02.0 (8086 0cfe): vfio-pci -> idxd 00:29:00.828 0000:f1:01.0 (8086 0b25): vfio-pci -> idxd 00:29:00.828 01:02:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:00.828 01:02:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:00.828 01:02:53 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:00.828 01:02:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:00.828 01:02:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.828 01:02:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:00.828 01:02:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:03.360 01:02:55 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:03.360 00:29:03.360 real 0m52.890s 00:29:03.360 user 1m15.502s 00:29:03.360 sys 
0m14.846s 00:29:03.360 01:02:55 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:03.360 01:02:55 -- common/autotest_common.sh@10 -- # set +x 00:29:03.360 ************************************ 00:29:03.360 END TEST nvmf_abort_qd_sizes 00:29:03.360 ************************************ 00:29:03.360 01:02:55 -- spdk/autotest.sh@293 -- # run_test keyring_file /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:29:03.360 01:02:55 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:03.360 01:02:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:03.360 01:02:55 -- common/autotest_common.sh@10 -- # set +x 00:29:03.360 ************************************ 00:29:03.360 START TEST keyring_file 00:29:03.360 ************************************ 00:29:03.360 01:02:55 -- common/autotest_common.sh@1111 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/file.sh 00:29:03.360 * Looking for test storage... 00:29:03.360 * Found test storage at /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring 00:29:03.360 01:02:55 -- keyring/file.sh@11 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/keyring/common.sh 00:29:03.360 01:02:55 -- keyring/common.sh@4 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/test/nvmf/common.sh 00:29:03.360 01:02:55 -- nvmf/common.sh@7 -- # uname -s 00:29:03.360 01:02:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:03.360 01:02:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:03.360 01:02:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:03.360 01:02:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:03.360 01:02:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:03.360 01:02:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:03.360 01:02:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:03.360 01:02:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:03.360 01:02:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:03.360 01:02:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:03.360 01:02:55 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00a749c1-515c-ee11-906e-a4bf019734ea 00:29:03.360 01:02:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=00a749c1-515c-ee11-906e-a4bf019734ea 00:29:03.360 01:02:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:03.360 01:02:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:03.360 01:02:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:29:03.360 01:02:55 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:03.360 01:02:55 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh 00:29:03.360 01:02:55 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:03.360 01:02:55 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:03.360 01:02:55 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:03.360 01:02:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.360 01:02:55 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.360 01:02:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.360 01:02:55 -- paths/export.sh@5 -- # export PATH 00:29:03.360 01:02:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:03.360 01:02:55 -- nvmf/common.sh@47 -- # : 0 00:29:03.360 01:02:55 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:03.360 01:02:55 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:03.360 01:02:55 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:03.360 01:02:55 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:03.360 01:02:55 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:03.360 01:02:55 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:03.360 01:02:55 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:03.360 01:02:55 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:03.360 01:02:55 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:29:03.360 01:02:55 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:29:03.360 01:02:55 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:29:03.360 01:02:55 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:29:03.360 01:02:55 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:29:03.360 01:02:55 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:29:03.360 01:02:55 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:03.360 01:02:55 -- keyring/common.sh@15 -- # local name key digest path 00:29:03.360 01:02:55 -- keyring/common.sh@17 -- # name=key0 00:29:03.360 01:02:55 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:03.360 01:02:55 -- keyring/common.sh@17 -- # digest=0 00:29:03.360 01:02:55 -- keyring/common.sh@18 -- # mktemp 00:29:03.360 01:02:55 -- keyring/common.sh@18 -- # path=/tmp/tmp.M3CE6qRBSs 00:29:03.360 01:02:55 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:03.360 01:02:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:03.360 01:02:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:03.360 01:02:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:03.360 01:02:55 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:29:03.360 01:02:55 -- nvmf/common.sh@693 -- # digest=0 00:29:03.360 01:02:55 -- nvmf/common.sh@694 -- # python - 
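prep_key above turns a raw hex key into the NVMe TLS PSK interchange format and parks it in a 0600 temp file; the "python -" invocation is the encoder. A sketch of the same transformation, assuming the interchange layout from the NVMe/TCP spec (prefix, two-hex-digit hash id, base64 of key plus little-endian CRC-32), mirroring the harness's inline python:

key=00112233445566778899aabbccddeeff     # key0 from the log
path=$(mktemp)                           # e.g. /tmp/tmp.M3CE6qRBSs above

python3 - "$key" <<'PY' > "$path"
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")
# digest 0 = no hash; 1 or 2 would mean a SHA-256 / SHA-384 retained PSK
print(f"NVMeTLSkey-1:00:{base64.b64encode(key + crc).decode()}:")
PY
chmod 0600 "$path"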
00:29:03.360 01:02:55 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.M3CE6qRBSs 00:29:03.360 01:02:55 -- keyring/common.sh@23 -- # echo /tmp/tmp.M3CE6qRBSs 00:29:03.360 01:02:55 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.M3CE6qRBSs 00:29:03.360 01:02:55 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:29:03.360 01:02:55 -- keyring/common.sh@15 -- # local name key digest path 00:29:03.360 01:02:55 -- keyring/common.sh@17 -- # name=key1 00:29:03.360 01:02:55 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:29:03.360 01:02:55 -- keyring/common.sh@17 -- # digest=0 00:29:03.360 01:02:55 -- keyring/common.sh@18 -- # mktemp 00:29:03.360 01:02:55 -- keyring/common.sh@18 -- # path=/tmp/tmp.3PGxk4DNdf 00:29:03.360 01:02:55 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:29:03.360 01:02:55 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:29:03.360 01:02:55 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:03.360 01:02:55 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:03.361 01:02:55 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:29:03.361 01:02:55 -- nvmf/common.sh@693 -- # digest=0 00:29:03.361 01:02:55 -- nvmf/common.sh@694 -- # python - 00:29:03.361 01:02:55 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.3PGxk4DNdf 00:29:03.361 01:02:55 -- keyring/common.sh@23 -- # echo /tmp/tmp.3PGxk4DNdf 00:29:03.361 01:02:55 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.3PGxk4DNdf 00:29:03.361 01:02:55 -- keyring/file.sh@30 -- # tgtpid=2956310 00:29:03.361 01:02:55 -- keyring/file.sh@32 -- # waitforlisten 2956310 00:29:03.361 01:02:55 -- keyring/file.sh@29 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/bin/spdk_tgt 00:29:03.361 01:02:55 -- common/autotest_common.sh@817 -- # '[' -z 2956310 ']' 00:29:03.361 01:02:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:03.361 01:02:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:03.361 01:02:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:03.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:03.361 01:02:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:03.361 01:02:55 -- common/autotest_common.sh@10 -- # set +x 00:29:03.361 [2024-04-27 01:02:55.854778] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:29:03.361 [2024-04-27 01:02:55.854894] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956310 ] 00:29:03.361 EAL: No free 2048 kB hugepages reported on node 1 00:29:03.361 [2024-04-27 01:02:55.970546] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:03.618 [2024-04-27 01:02:56.066602] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:03.878 01:02:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:03.878 01:02:56 -- common/autotest_common.sh@850 -- # return 0 00:29:03.878 01:02:56 -- keyring/file.sh@33 -- # rpc_cmd 00:29:03.878 01:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:03.878 01:02:56 -- common/autotest_common.sh@10 -- # set +x 00:29:03.878 [2024-04-27 01:02:56.546162] tcp.c: 669:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:03.878 null0 00:29:04.139 [2024-04-27 01:02:56.578166] tcp.c: 925:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:29:04.139 [2024-04-27 01:02:56.578420] tcp.c: 964:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:04.139 [2024-04-27 01:02:56.586155] tcp.c:3652:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:29:04.139 01:02:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:04.139 01:02:56 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:04.139 01:02:56 -- common/autotest_common.sh@638 -- # local es=0 00:29:04.139 01:02:56 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:04.139 01:02:56 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:04.139 01:02:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:04.139 01:02:56 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:04.139 01:02:56 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:04.139 01:02:56 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:29:04.139 01:02:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:04.139 01:02:56 -- common/autotest_common.sh@10 -- # set +x 00:29:04.139 [2024-04-27 01:02:56.598153] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.
request: 00:29:04.139 { 00:29:04.139 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:29:04.139 "secure_channel": false, 00:29:04.139 "listen_address": { 00:29:04.139 "trtype": "tcp", 00:29:04.139 "traddr": "127.0.0.1", 00:29:04.139 "trsvcid": "4420" 00:29:04.139 }, 00:29:04.139 "method": "nvmf_subsystem_add_listener", 00:29:04.139 "req_id": 1 00:29:04.139 } 00:29:04.139 Got JSON-RPC error response 00:29:04.139 response: 00:29:04.139 { 00:29:04.139 "code": -32602, 00:29:04.139 "message": "Invalid parameters" 00:29:04.139 } 00:29:04.140 01:02:56 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:04.140 01:02:56 -- common/autotest_common.sh@641 -- # es=1 00:29:04.140 01:02:56 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:04.140 01:02:56 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:04.140 01:02:56 -- common/autotest_common.sh@665 -- # (( !es == 0 ))
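The NOT/valid_exec_arg trace above is the suite's expected-failure idiom: the target is already listening on 127.0.0.1:4420 with a secure channel (TLS), so re-adding the same listener with secure_channel=false must be rejected, and the step passes precisely because rpc_cmd comes back with the -32602 error. A minimal sketch of the idiom, with NOT reduced to its essence (the real helper in autotest_common.sh also records the exit status in es and special-cases values above 128):

NOT() { ! "$@"; }    # invert the wrapped command's status: an expected failure becomes success
NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0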
00:29:04.140 01:02:56 -- keyring/file.sh@46 -- # bperfpid=2956329 00:29:04.140 01:02:56 -- keyring/file.sh@48 -- # waitforlisten 2956329 /var/tmp/bperf.sock 00:29:04.140 01:02:56 -- common/autotest_common.sh@817 -- # '[' -z 2956329 ']' 00:29:04.140 01:02:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:04.140 01:02:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:04.140 01:02:56 -- keyring/file.sh@45 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:29:04.140 01:02:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:04.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:04.140 01:02:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:04.140 01:02:56 -- common/autotest_common.sh@10 -- # set +x 00:29:04.140 [2024-04-27 01:02:56.670798] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 00:29:04.140 [2024-04-27 01:02:56.670905] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2956329 ] 00:29:04.140 EAL: No free 2048 kB hugepages reported on node 1 00:29:04.140 [2024-04-27 01:02:56.806613] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.400 [2024-04-27 01:02:56.945579] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:04.970 01:02:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:04.970 01:02:57 -- common/autotest_common.sh@850 -- # return 0 00:29:04.970 01:02:57 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M3CE6qRBSs 00:29:04.970 01:02:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M3CE6qRBSs 00:29:04.970 01:02:57 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3PGxk4DNdf 00:29:04.970 01:02:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3PGxk4DNdf 00:29:05.310 01:02:57 -- keyring/file.sh@51 -- # get_key key0 00:29:05.310 01:02:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.310 01:02:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.310 01:02:57 -- keyring/file.sh@51 -- # jq -r .path 00:29:05.310 01:02:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.310 01:02:57 -- keyring/file.sh@51 -- # [[ /tmp/tmp.M3CE6qRBSs == \/\t\m\p\/\t\m\p\.\M\3\C\E\6\q\R\B\S\s ]] 00:29:05.310 01:02:57 -- keyring/file.sh@52 -- # get_key key1 00:29:05.310 01:02:57 -- keyring/file.sh@52 -- # jq -r .path 00:29:05.310 01:02:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:05.310 01:02:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.310 01:02:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.310 01:02:57 -- keyring/file.sh@52 -- # [[ /tmp/tmp.3PGxk4DNdf == \/\t\m\p\/\t\m\p\.\3\P\G\x\k\4\D\N\d\f ]] 00:29:05.310 01:02:57 -- keyring/file.sh@53 -- # get_refcnt key0 00:29:05.310 
01:02:57 -- keyring/common.sh@12 -- # get_key key0 00:29:05.310 01:02:57 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.310 01:02:57 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.310 01:02:57 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.310 01:02:57 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.568 01:02:58 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:29:05.568 01:02:58 -- keyring/file.sh@54 -- # get_refcnt key1 00:29:05.568 01:02:58 -- keyring/common.sh@12 -- # get_key key1 00:29:05.568 01:02:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.568 01:02:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.568 01:02:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:05.568 01:02:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:05.568 01:02:58 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:29:05.568 01:02:58 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.568 01:02:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:05.827 [2024-04-27 01:02:58.338702] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:05.827 nvme0n1 00:29:05.827 01:02:58 -- keyring/file.sh@59 -- # get_refcnt key0 00:29:05.827 01:02:58 -- keyring/common.sh@12 -- # get_key key0 00:29:05.827 01:02:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:05.827 01:02:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:05.827 01:02:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:05.827 01:02:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.086 01:02:58 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:29:06.086 01:02:58 -- keyring/file.sh@60 -- # get_refcnt key1 00:29:06.086 01:02:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:06.086 01:02:58 -- keyring/common.sh@12 -- # get_key key1 00:29:06.086 01:02:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:06.086 01:02:58 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:06.086 01:02:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:06.086 01:02:58 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:29:06.087 01:02:58 -- keyring/file.sh@62 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.087 Running I/O for 1 seconds... 
00:29:07.462
00:29:07.462 Latency(us)
00:29:07.462 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:07.462 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:29:07.462 nvme0n1 : 1.00 16633.71 64.98 0.00 0.00 7674.01 4277.09 13659.08
00:29:07.462 ===================================================================================================================
00:29:07.462 Total : 16633.71 64.98 0.00 0.00 7674.01 4277.09 13659.08
00:29:07.462 0
00:29:07.462 01:02:59 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:07.462 01:02:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:07.462 01:02:59 -- keyring/file.sh@65 -- # get_refcnt key0 00:29:07.462 01:02:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.462 01:02:59 -- keyring/common.sh@12 -- # get_key key0 00:29:07.462 01:02:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.462 01:02:59 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.462 01:02:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.462 01:03:00 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:29:07.462 01:03:00 -- keyring/file.sh@66 -- # get_refcnt key1 00:29:07.462 01:03:00 -- keyring/common.sh@12 -- # get_key key1 00:29:07.462 01:03:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.462 01:03:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.462 01:03:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.462 01:03:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:07.719 01:03:00 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:29:07.719 01:03:00 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:07.719 01:03:00 -- common/autotest_common.sh@638 -- # local es=0 00:29:07.720 01:03:00 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:07.720 01:03:00 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:07.720 01:03:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:07.720 01:03:00 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:07.720 01:03:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:07.720 01:03:00 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:07.720 01:03:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:29:07.720 [2024-04-27 01:03:00.352816] /var/jenkins/workspace/dsa-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:29:07.720 [2024-04-27 01:03:00.353042]
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (107): Transport endpoint is not connected 00:29:07.720 [2024-04-27 01:03:00.354021] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x614000009240 (9): Bad file descriptor 00:29:07.720 [2024-04-27 01:03:00.355016] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:07.720 [2024-04-27 01:03:00.355031] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:29:07.720 [2024-04-27 01:03:00.355041] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:07.720 request: 00:29:07.720 { 00:29:07.720 "name": "nvme0", 00:29:07.720 "trtype": "tcp", 00:29:07.720 "traddr": "127.0.0.1", 00:29:07.720 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:07.720 "adrfam": "ipv4", 00:29:07.720 "trsvcid": "4420", 00:29:07.720 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:07.720 "psk": "key1", 00:29:07.720 "method": "bdev_nvme_attach_controller", 00:29:07.720 "req_id": 1 00:29:07.720 } 00:29:07.720 Got JSON-RPC error response 00:29:07.720 response: 00:29:07.720 { 00:29:07.720 "code": -32602, 00:29:07.720 "message": "Invalid parameters" 00:29:07.720 } 00:29:07.720 01:03:00 -- common/autotest_common.sh@641 -- # es=1 00:29:07.720 01:03:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:07.720 01:03:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:07.720 01:03:00 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:07.720 01:03:00 -- keyring/file.sh@71 -- # get_refcnt key0 00:29:07.720 01:03:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.720 01:03:00 -- keyring/common.sh@12 -- # get_key key0 00:29:07.720 01:03:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.720 01:03:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.720 01:03:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:07.977 01:03:00 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:29:07.977 01:03:00 -- keyring/file.sh@72 -- # get_refcnt key1 00:29:07.977 01:03:00 -- keyring/common.sh@12 -- # get_key key1 00:29:07.977 01:03:00 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:07.977 01:03:00 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:07.977 01:03:00 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:07.977 01:03:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:07.977 01:03:00 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:29:07.977 01:03:00 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:29:07.977 01:03:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:08.236 01:03:00 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:29:08.236 01:03:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:29:08.236 01:03:00 -- keyring/file.sh@77 -- # jq length 00:29:08.236 01:03:00 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:29:08.236 01:03:00 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:08.496 01:03:01 -- 
keyring/file.sh@77 -- # (( 0 == 0 )) 00:29:08.496 01:03:01 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.M3CE6qRBSs 00:29:08.496 01:03:01 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.M3CE6qRBSs 00:29:08.496 01:03:01 -- common/autotest_common.sh@638 -- # local es=0 00:29:08.496 01:03:01 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.M3CE6qRBSs 00:29:08.496 01:03:01 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:08.496 01:03:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:08.496 01:03:01 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:08.496 01:03:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:08.496 01:03:01 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M3CE6qRBSs 00:29:08.496 01:03:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M3CE6qRBSs 00:29:08.496 [2024-04-27 01:03:01.193049] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.M3CE6qRBSs': 0100660 00:29:08.496 [2024-04-27 01:03:01.193085] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:29:08.756 request: 00:29:08.756 { 00:29:08.756 "name": "key0", 00:29:08.756 "path": "/tmp/tmp.M3CE6qRBSs", 00:29:08.756 "method": "keyring_file_add_key", 00:29:08.756 "req_id": 1 00:29:08.756 } 00:29:08.756 Got JSON-RPC error response 00:29:08.756 response: 00:29:08.756 { 00:29:08.756 "code": -1, 00:29:08.756 "message": "Operation not permitted" 00:29:08.756 } 00:29:08.756 01:03:01 -- common/autotest_common.sh@641 -- # es=1 00:29:08.756 01:03:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:08.756 01:03:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:08.756 01:03:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:08.756 01:03:01 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.M3CE6qRBSs 00:29:08.756 01:03:01 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.M3CE6qRBSs 00:29:08.756 01:03:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.M3CE6qRBSs 00:29:08.756 01:03:01 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.M3CE6qRBSs 00:29:08.756 01:03:01 -- keyring/file.sh@88 -- # get_refcnt key0 00:29:08.756 01:03:01 -- keyring/common.sh@12 -- # get_key key0 00:29:08.756 01:03:01 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:08.756 01:03:01 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:08.756 01:03:01 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:08.756 01:03:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.015 01:03:01 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:29:09.015 01:03:01 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.015 01:03:01 -- common/autotest_common.sh@638 -- # local es=0 00:29:09.015 01:03:01 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.015 01:03:01 -- 
common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:29:09.015 01:03:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:09.015 01:03:01 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:29:09.015 01:03:01 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:09.015 01:03:01 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.015 01:03:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.015 [2024-04-27 01:03:01.661180] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.M3CE6qRBSs': No such file or directory 00:29:09.015 [2024-04-27 01:03:01.661213] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:29:09.015 [2024-04-27 01:03:01.661246] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:29:09.015 [2024-04-27 01:03:01.661259] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:09.015 [2024-04-27 01:03:01.661269] bdev_nvme.c:6208:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:29:09.015 request: 00:29:09.015 { 00:29:09.015 "name": "nvme0", 00:29:09.015 "trtype": "tcp", 00:29:09.015 "traddr": "127.0.0.1", 00:29:09.015 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:09.015 "adrfam": "ipv4", 00:29:09.015 "trsvcid": "4420", 00:29:09.015 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:09.015 "psk": "key0", 00:29:09.015 "method": "bdev_nvme_attach_controller", 00:29:09.015 "req_id": 1 00:29:09.015 } 00:29:09.015 Got JSON-RPC error response 00:29:09.015 response: 00:29:09.015 { 00:29:09.015 "code": -19, 00:29:09.015 "message": "No such device" 00:29:09.015 } 00:29:09.015 01:03:01 -- common/autotest_common.sh@641 -- # es=1 00:29:09.015 01:03:01 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:09.015 01:03:01 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:09.015 01:03:01 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:09.015 01:03:01 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:29:09.015 01:03:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:09.274 01:03:01 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:29:09.274 01:03:01 -- keyring/common.sh@15 -- # local name key digest path 00:29:09.274 01:03:01 -- keyring/common.sh@17 -- # name=key0 00:29:09.274 01:03:01 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:29:09.274 01:03:01 -- keyring/common.sh@17 -- # digest=0 00:29:09.274 01:03:01 -- keyring/common.sh@18 -- # mktemp 00:29:09.274 01:03:01 -- keyring/common.sh@18 -- # path=/tmp/tmp.L3J02U4o4h 00:29:09.274 01:03:01 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:29:09.274 01:03:01 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:29:09.274 01:03:01 -- nvmf/common.sh@691 -- # local prefix key digest 00:29:09.274 01:03:01 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:29:09.274 01:03:01 -- nvmf/common.sh@693 
-- # key=00112233445566778899aabbccddeeff 00:29:09.274 01:03:01 -- nvmf/common.sh@693 -- # digest=0 00:29:09.274 01:03:01 -- nvmf/common.sh@694 -- # python - 00:29:09.274 01:03:01 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.L3J02U4o4h 00:29:09.274 01:03:01 -- keyring/common.sh@23 -- # echo /tmp/tmp.L3J02U4o4h 00:29:09.274 01:03:01 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.L3J02U4o4h 00:29:09.274 01:03:01 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.L3J02U4o4h 00:29:09.274 01:03:01 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.L3J02U4o4h 00:29:09.532 01:03:02 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.532 01:03:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:09.789 nvme0n1 00:29:09.789 01:03:02 -- keyring/file.sh@99 -- # get_refcnt key0 00:29:09.789 01:03:02 -- keyring/common.sh@12 -- # get_key key0 00:29:09.789 01:03:02 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:09.789 01:03:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:09.789 01:03:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:09.789 01:03:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:09.789 01:03:02 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:29:09.789 01:03:02 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:29:09.789 01:03:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:29:10.047 01:03:02 -- keyring/file.sh@101 -- # get_key key0 00:29:10.047 01:03:02 -- keyring/file.sh@101 -- # jq -r .removed 00:29:10.047 01:03:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.047 01:03:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.047 01:03:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.047 01:03:02 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:29:10.047 01:03:02 -- keyring/file.sh@102 -- # get_refcnt key0 00:29:10.047 01:03:02 -- keyring/common.sh@12 -- # get_key key0 00:29:10.047 01:03:02 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:10.047 01:03:02 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:10.047 01:03:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:10.047 01:03:02 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:10.305 01:03:02 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:29:10.305 01:03:02 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:29:10.305 01:03:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:29:10.305 01:03:02 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:29:10.305 01:03:02 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock keyring_get_keys 00:29:10.305 01:03:02 -- keyring/file.sh@104 -- # jq length 00:29:10.564 01:03:03 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:29:10.564 01:03:03 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.L3J02U4o4h 00:29:10.564 01:03:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.L3J02U4o4h 00:29:10.564 01:03:03 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.3PGxk4DNdf 00:29:10.564 01:03:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.3PGxk4DNdf 00:29:10.821 01:03:03 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:10.821 01:03:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:29:11.081 nvme0n1 00:29:11.081 01:03:03 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:29:11.081 01:03:03 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:29:11.081 01:03:03 -- keyring/file.sh@112 -- # config='{ 00:29:11.081 "subsystems": [ 00:29:11.081 { 00:29:11.081 "subsystem": "keyring", 00:29:11.081 "config": [ 00:29:11.081 { 00:29:11.081 "method": "keyring_file_add_key", 00:29:11.081 "params": { 00:29:11.081 "name": "key0", 00:29:11.081 "path": "/tmp/tmp.L3J02U4o4h" 00:29:11.081 } 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "method": "keyring_file_add_key", 00:29:11.081 "params": { 00:29:11.081 "name": "key1", 00:29:11.081 "path": "/tmp/tmp.3PGxk4DNdf" 00:29:11.081 } 00:29:11.081 } 00:29:11.081 ] 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "subsystem": "iobuf", 00:29:11.081 "config": [ 00:29:11.081 { 00:29:11.081 "method": "iobuf_set_options", 00:29:11.081 "params": { 00:29:11.081 "small_pool_count": 8192, 00:29:11.081 "large_pool_count": 1024, 00:29:11.081 "small_bufsize": 8192, 00:29:11.081 "large_bufsize": 135168 00:29:11.081 } 00:29:11.081 } 00:29:11.081 ] 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "subsystem": "sock", 00:29:11.081 "config": [ 00:29:11.081 { 00:29:11.081 "method": "sock_impl_set_options", 00:29:11.081 "params": { 00:29:11.081 "impl_name": "posix", 00:29:11.081 "recv_buf_size": 2097152, 00:29:11.081 "send_buf_size": 2097152, 00:29:11.081 "enable_recv_pipe": true, 00:29:11.081 "enable_quickack": false, 00:29:11.081 "enable_placement_id": 0, 00:29:11.081 "enable_zerocopy_send_server": true, 00:29:11.081 "enable_zerocopy_send_client": false, 00:29:11.081 "zerocopy_threshold": 0, 00:29:11.081 "tls_version": 0, 00:29:11.081 "enable_ktls": false 00:29:11.081 } 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "method": "sock_impl_set_options", 00:29:11.081 "params": { 00:29:11.081 "impl_name": "ssl", 00:29:11.081 "recv_buf_size": 4096, 00:29:11.081 "send_buf_size": 4096, 00:29:11.081 "enable_recv_pipe": true, 00:29:11.081 "enable_quickack": false, 00:29:11.081 "enable_placement_id": 0, 00:29:11.081 "enable_zerocopy_send_server": true, 00:29:11.081 "enable_zerocopy_send_client": false, 00:29:11.081 "zerocopy_threshold": 0, 00:29:11.081 "tls_version": 0, 00:29:11.081 "enable_ktls": false 00:29:11.081 } 
00:29:11.081 } 00:29:11.081 ] 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "subsystem": "vmd", 00:29:11.081 "config": [] 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "subsystem": "accel", 00:29:11.081 "config": [ 00:29:11.081 { 00:29:11.081 "method": "accel_set_options", 00:29:11.081 "params": { 00:29:11.081 "small_cache_size": 128, 00:29:11.081 "large_cache_size": 16, 00:29:11.081 "task_count": 2048, 00:29:11.081 "sequence_count": 2048, 00:29:11.081 "buf_count": 2048 00:29:11.081 } 00:29:11.081 } 00:29:11.081 ] 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "subsystem": "bdev", 00:29:11.081 "config": [ 00:29:11.081 { 00:29:11.081 "method": "bdev_set_options", 00:29:11.081 "params": { 00:29:11.081 "bdev_io_pool_size": 65535, 00:29:11.081 "bdev_io_cache_size": 256, 00:29:11.081 "bdev_auto_examine": true, 00:29:11.081 "iobuf_small_cache_size": 128, 00:29:11.081 "iobuf_large_cache_size": 16 00:29:11.081 } 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "method": "bdev_raid_set_options", 00:29:11.081 "params": { 00:29:11.081 "process_window_size_kb": 1024 00:29:11.081 } 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "method": "bdev_iscsi_set_options", 00:29:11.081 "params": { 00:29:11.081 "timeout_sec": 30 00:29:11.081 } 00:29:11.081 }, 00:29:11.081 { 00:29:11.081 "method": "bdev_nvme_set_options", 00:29:11.081 "params": { 00:29:11.081 "action_on_timeout": "none", 00:29:11.081 "timeout_us": 0, 00:29:11.081 "timeout_admin_us": 0, 00:29:11.081 "keep_alive_timeout_ms": 10000, 00:29:11.081 "arbitration_burst": 0, 00:29:11.081 "low_priority_weight": 0, 00:29:11.081 "medium_priority_weight": 0, 00:29:11.081 "high_priority_weight": 0, 00:29:11.081 "nvme_adminq_poll_period_us": 10000, 00:29:11.081 "nvme_ioq_poll_period_us": 0, 00:29:11.081 "io_queue_requests": 512, 00:29:11.081 "delay_cmd_submit": true, 00:29:11.081 "transport_retry_count": 4, 00:29:11.081 "bdev_retry_count": 3, 00:29:11.081 "transport_ack_timeout": 0, 00:29:11.081 "ctrlr_loss_timeout_sec": 0, 00:29:11.081 "reconnect_delay_sec": 0, 00:29:11.081 "fast_io_fail_timeout_sec": 0, 00:29:11.082 "disable_auto_failback": false, 00:29:11.082 "generate_uuids": false, 00:29:11.082 "transport_tos": 0, 00:29:11.082 "nvme_error_stat": false, 00:29:11.082 "rdma_srq_size": 0, 00:29:11.082 "io_path_stat": false, 00:29:11.082 "allow_accel_sequence": false, 00:29:11.082 "rdma_max_cq_size": 0, 00:29:11.082 "rdma_cm_event_timeout_ms": 0, 00:29:11.082 "dhchap_digests": [ 00:29:11.082 "sha256", 00:29:11.082 "sha384", 00:29:11.082 "sha512" 00:29:11.082 ], 00:29:11.082 "dhchap_dhgroups": [ 00:29:11.082 "null", 00:29:11.082 "ffdhe2048", 00:29:11.082 "ffdhe3072", 00:29:11.082 "ffdhe4096", 00:29:11.082 "ffdhe6144", 00:29:11.082 "ffdhe8192" 00:29:11.082 ] 00:29:11.082 } 00:29:11.082 }, 00:29:11.082 { 00:29:11.082 "method": "bdev_nvme_attach_controller", 00:29:11.082 "params": { 00:29:11.082 "name": "nvme0", 00:29:11.082 "trtype": "TCP", 00:29:11.082 "adrfam": "IPv4", 00:29:11.082 "traddr": "127.0.0.1", 00:29:11.082 "trsvcid": "4420", 00:29:11.082 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.082 "prchk_reftag": false, 00:29:11.082 "prchk_guard": false, 00:29:11.082 "ctrlr_loss_timeout_sec": 0, 00:29:11.082 "reconnect_delay_sec": 0, 00:29:11.082 "fast_io_fail_timeout_sec": 0, 00:29:11.082 "psk": "key0", 00:29:11.082 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:11.082 "hdgst": false, 00:29:11.082 "ddgst": false 00:29:11.082 } 00:29:11.082 }, 00:29:11.082 { 00:29:11.082 "method": "bdev_nvme_set_hotplug", 00:29:11.082 "params": { 00:29:11.082 "period_us": 100000, 00:29:11.082 
"enable": false 00:29:11.082 } 00:29:11.082 }, 00:29:11.082 { 00:29:11.082 "method": "bdev_wait_for_examine" 00:29:11.082 } 00:29:11.082 ] 00:29:11.082 }, 00:29:11.082 { 00:29:11.082 "subsystem": "nbd", 00:29:11.082 "config": [] 00:29:11.082 } 00:29:11.082 ] 00:29:11.082 }' 00:29:11.082 01:03:03 -- keyring/file.sh@114 -- # killprocess 2956329 00:29:11.082 01:03:03 -- common/autotest_common.sh@936 -- # '[' -z 2956329 ']' 00:29:11.082 01:03:03 -- common/autotest_common.sh@940 -- # kill -0 2956329 00:29:11.082 01:03:03 -- common/autotest_common.sh@941 -- # uname 00:29:11.082 01:03:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:11.082 01:03:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2956329 00:29:11.340 01:03:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:11.340 01:03:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:11.340 01:03:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2956329' 00:29:11.340 killing process with pid 2956329 00:29:11.340 01:03:03 -- common/autotest_common.sh@955 -- # kill 2956329 00:29:11.340 Received shutdown signal, test time was about 1.000000 seconds 00:29:11.340 00:29:11.340 Latency(us) 00:29:11.340 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:11.340 =================================================================================================================== 00:29:11.340 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:11.340 01:03:03 -- common/autotest_common.sh@960 -- # wait 2956329 00:29:11.598 01:03:04 -- keyring/file.sh@117 -- # bperfpid=2958099 00:29:11.598 01:03:04 -- keyring/file.sh@119 -- # waitforlisten 2958099 /var/tmp/bperf.sock 00:29:11.598 01:03:04 -- common/autotest_common.sh@817 -- # '[' -z 2958099 ']' 00:29:11.598 01:03:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:11.598 01:03:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:11.598 01:03:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:11.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:11.598 01:03:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:11.598 01:03:04 -- common/autotest_common.sh@10 -- # set +x 00:29:11.598 01:03:04 -- keyring/file.sh@115 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:29:11.598 01:03:04 -- keyring/file.sh@115 -- # echo '{ 00:29:11.598 "subsystems": [ 00:29:11.598 { 00:29:11.598 "subsystem": "keyring", 00:29:11.598 "config": [ 00:29:11.598 { 00:29:11.598 "method": "keyring_file_add_key", 00:29:11.598 "params": { 00:29:11.598 "name": "key0", 00:29:11.598 "path": "/tmp/tmp.L3J02U4o4h" 00:29:11.598 } 00:29:11.598 }, 00:29:11.598 { 00:29:11.598 "method": "keyring_file_add_key", 00:29:11.598 "params": { 00:29:11.598 "name": "key1", 00:29:11.598 "path": "/tmp/tmp.3PGxk4DNdf" 00:29:11.598 } 00:29:11.598 } 00:29:11.598 ] 00:29:11.598 }, 00:29:11.598 { 00:29:11.598 "subsystem": "iobuf", 00:29:11.598 "config": [ 00:29:11.598 { 00:29:11.598 "method": "iobuf_set_options", 00:29:11.598 "params": { 00:29:11.598 "small_pool_count": 8192, 00:29:11.598 "large_pool_count": 1024, 00:29:11.598 "small_bufsize": 8192, 00:29:11.598 "large_bufsize": 135168 00:29:11.598 } 00:29:11.598 } 00:29:11.598 ] 00:29:11.598 }, 00:29:11.598 { 00:29:11.598 "subsystem": "sock", 00:29:11.598 "config": [ 00:29:11.598 { 00:29:11.598 "method": "sock_impl_set_options", 00:29:11.598 "params": { 00:29:11.598 "impl_name": "posix", 00:29:11.598 "recv_buf_size": 2097152, 00:29:11.598 "send_buf_size": 2097152, 00:29:11.598 "enable_recv_pipe": true, 00:29:11.598 "enable_quickack": false, 00:29:11.598 "enable_placement_id": 0, 00:29:11.598 "enable_zerocopy_send_server": true, 00:29:11.598 "enable_zerocopy_send_client": false, 00:29:11.598 "zerocopy_threshold": 0, 00:29:11.598 "tls_version": 0, 00:29:11.598 "enable_ktls": false 00:29:11.598 } 00:29:11.598 }, 00:29:11.598 { 00:29:11.598 "method": "sock_impl_set_options", 00:29:11.598 "params": { 00:29:11.598 "impl_name": "ssl", 00:29:11.598 "recv_buf_size": 4096, 00:29:11.598 "send_buf_size": 4096, 00:29:11.598 "enable_recv_pipe": true, 00:29:11.598 "enable_quickack": false, 00:29:11.598 "enable_placement_id": 0, 00:29:11.598 "enable_zerocopy_send_server": true, 00:29:11.598 "enable_zerocopy_send_client": false, 00:29:11.598 "zerocopy_threshold": 0, 00:29:11.598 "tls_version": 0, 00:29:11.598 "enable_ktls": false 00:29:11.598 } 00:29:11.598 } 00:29:11.598 ] 00:29:11.598 }, 00:29:11.598 { 00:29:11.598 "subsystem": "vmd", 00:29:11.598 "config": [] 00:29:11.598 }, 00:29:11.598 { 00:29:11.598 "subsystem": "accel", 00:29:11.598 "config": [ 00:29:11.598 { 00:29:11.598 "method": "accel_set_options", 00:29:11.598 "params": { 00:29:11.598 "small_cache_size": 128, 00:29:11.599 "large_cache_size": 16, 00:29:11.599 "task_count": 2048, 00:29:11.599 "sequence_count": 2048, 00:29:11.599 "buf_count": 2048 00:29:11.599 } 00:29:11.599 } 00:29:11.599 ] 00:29:11.599 }, 00:29:11.599 { 00:29:11.599 "subsystem": "bdev", 00:29:11.599 "config": [ 00:29:11.599 { 00:29:11.599 "method": "bdev_set_options", 00:29:11.599 "params": { 00:29:11.599 "bdev_io_pool_size": 65535, 00:29:11.599 "bdev_io_cache_size": 256, 00:29:11.599 "bdev_auto_examine": true, 00:29:11.599 "iobuf_small_cache_size": 128, 00:29:11.599 "iobuf_large_cache_size": 16 00:29:11.599 } 00:29:11.599 }, 00:29:11.599 { 00:29:11.599 "method": "bdev_raid_set_options", 00:29:11.599 "params": { 00:29:11.599 "process_window_size_kb": 1024 00:29:11.599 } 00:29:11.599 }, 00:29:11.599 { 
00:29:11.599 "method": "bdev_iscsi_set_options", 00:29:11.599 "params": { 00:29:11.599 "timeout_sec": 30 00:29:11.599 } 00:29:11.599 }, 00:29:11.599 { 00:29:11.599 "method": "bdev_nvme_set_options", 00:29:11.599 "params": { 00:29:11.599 "action_on_timeout": "none", 00:29:11.599 "timeout_us": 0, 00:29:11.599 "timeout_admin_us": 0, 00:29:11.599 "keep_alive_timeout_ms": 10000, 00:29:11.599 "arbitration_burst": 0, 00:29:11.599 "low_priority_weight": 0, 00:29:11.599 "medium_priority_weight": 0, 00:29:11.599 "high_priority_weight": 0, 00:29:11.599 "nvme_adminq_poll_period_us": 10000, 00:29:11.599 "nvme_ioq_poll_period_us": 0, 00:29:11.599 "io_queue_requests": 512, 00:29:11.599 "delay_cmd_submit": true, 00:29:11.599 "transport_retry_count": 4, 00:29:11.599 "bdev_retry_count": 3, 00:29:11.599 "transport_ack_timeout": 0, 00:29:11.599 "ctrlr_loss_timeout_sec": 0, 00:29:11.599 "reconnect_delay_sec": 0, 00:29:11.599 "fast_io_fail_timeout_sec": 0, 00:29:11.599 "disable_auto_failback": false, 00:29:11.599 "generate_uuids": false, 00:29:11.599 "transport_tos": 0, 00:29:11.599 "nvme_error_stat": false, 00:29:11.599 "rdma_srq_size": 0, 00:29:11.599 "io_path_stat": false, 00:29:11.599 "allow_accel_sequence": false, 00:29:11.599 "rdma_max_cq_size": 0, 00:29:11.599 "rdma_cm_event_timeout_ms": 0, 00:29:11.599 "dhchap_digests": [ 00:29:11.599 "sha256", 00:29:11.599 "sha384", 00:29:11.599 "sha512" 00:29:11.599 ], 00:29:11.599 "dhchap_dhgroups": [ 00:29:11.599 "null", 00:29:11.599 "ffdhe2048", 00:29:11.599 "ffdhe3072", 00:29:11.599 "ffdhe4096", 00:29:11.599 "ffdhe6144", 00:29:11.599 "ffdhe8192" 00:29:11.599 ] 00:29:11.599 } 00:29:11.599 }, 00:29:11.599 { 00:29:11.599 "method": "bdev_nvme_attach_controller", 00:29:11.599 "params": { 00:29:11.599 "name": "nvme0", 00:29:11.599 "trtype": "TCP", 00:29:11.599 "adrfam": "IPv4", 00:29:11.599 "traddr": "127.0.0.1", 00:29:11.599 "trsvcid": "4420", 00:29:11.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:11.599 "prchk_reftag": false, 00:29:11.599 "prchk_guard": false, 00:29:11.599 "ctrlr_loss_timeout_sec": 0, 00:29:11.599 "reconnect_delay_sec": 0, 00:29:11.599 "fast_io_fail_timeout_sec": 0, 00:29:11.599 "psk": "key0", 00:29:11.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:11.599 "hdgst": false, 00:29:11.599 "ddgst": false 00:29:11.599 } 00:29:11.599 }, 00:29:11.599 { 00:29:11.599 "method": "bdev_nvme_set_hotplug", 00:29:11.599 "params": { 00:29:11.599 "period_us": 100000, 00:29:11.599 "enable": false 00:29:11.599 } 00:29:11.599 }, 00:29:11.599 { 00:29:11.599 "method": "bdev_wait_for_examine" 00:29:11.599 } 00:29:11.599 ] 00:29:11.599 }, 00:29:11.599 { 00:29:11.599 "subsystem": "nbd", 00:29:11.599 "config": [] 00:29:11.599 } 00:29:11.599 ] 00:29:11.599 }' 00:29:11.599 [2024-04-27 01:03:04.240893] Starting SPDK v24.05-pre git sha1 d4fbb5733 / DPDK 23.11.0 initialization... 
00:29:11.599 [2024-04-27 01:03:04.241007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2958099 ] 00:29:11.857 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.857 [2024-04-27 01:03:04.351977] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.857 [2024-04-27 01:03:04.447359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.115 [2024-04-27 01:03:04.685253] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:29:12.374 01:03:04 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:12.374 01:03:04 -- common/autotest_common.sh@850 -- # return 0 00:29:12.374 01:03:04 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:29:12.374 01:03:04 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.374 01:03:04 -- keyring/file.sh@120 -- # jq length 00:29:12.631 01:03:05 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:29:12.631 01:03:05 -- keyring/file.sh@121 -- # get_refcnt key0 00:29:12.631 01:03:05 -- keyring/common.sh@12 -- # get_key key0 00:29:12.631 01:03:05 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.631 01:03:05 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:29:12.631 01:03:05 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.631 01:03:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.631 01:03:05 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:29:12.631 01:03:05 -- keyring/file.sh@122 -- # get_refcnt key1 00:29:12.631 01:03:05 -- keyring/common.sh@12 -- # jq -r .refcnt 00:29:12.631 01:03:05 -- keyring/common.sh@12 -- # get_key key1 00:29:12.631 01:03:05 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:29:12.631 01:03:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:29:12.631 01:03:05 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:29:12.891 01:03:05 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:29:12.891 01:03:05 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:29:12.891 01:03:05 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:29:12.891 01:03:05 -- keyring/common.sh@8 -- # /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:29:12.891 01:03:05 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:29:12.891 01:03:05 -- keyring/file.sh@1 -- # cleanup 00:29:12.891 01:03:05 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.L3J02U4o4h /tmp/tmp.3PGxk4DNdf 00:29:12.891 01:03:05 -- keyring/file.sh@20 -- # killprocess 2958099 00:29:12.891 01:03:05 -- common/autotest_common.sh@936 -- # '[' -z 2958099 ']' 00:29:12.891 01:03:05 -- common/autotest_common.sh@940 -- # kill -0 2958099 00:29:12.891 01:03:05 -- common/autotest_common.sh@941 -- # uname 00:29:12.891 01:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:12.891 01:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2958099 00:29:12.891 01:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:12.891 01:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:12.891 01:03:05 -- common/autotest_common.sh@954 -- # 
echo 'killing process with pid 2958099' 00:29:12.891 killing process with pid 2958099 01:03:05 -- common/autotest_common.sh@955 -- # kill 2958099
00:29:12.891 Received shutdown signal, test time was about 1.000000 seconds
00:29:12.891
00:29:12.891 Latency(us)
00:29:12.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:12.891 ===================================================================================================================
00:29:12.891 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00
00:29:12.891 01:03:05 -- common/autotest_common.sh@960 -- # wait 2958099 00:29:13.460 01:03:05 -- keyring/file.sh@21 -- # killprocess 2956310 00:29:13.460 01:03:05 -- common/autotest_common.sh@936 -- # '[' -z 2956310 ']' 00:29:13.460 01:03:05 -- common/autotest_common.sh@940 -- # kill -0 2956310 00:29:13.460 01:03:05 -- common/autotest_common.sh@941 -- # uname 00:29:13.460 01:03:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:13.460 01:03:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 2956310 00:29:13.460 01:03:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:13.460 01:03:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:13.460 01:03:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 2956310' 00:29:13.460 killing process with pid 2956310 00:29:13.460 01:03:05 -- common/autotest_common.sh@955 -- # kill 2956310 00:29:13.460 [2024-04-27 01:03:05.996628] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:29:13.460 01:03:05 -- common/autotest_common.sh@960 -- # wait 2956310
00:29:14.393
00:29:14.393 real 0m11.267s
00:29:14.393 user 0m24.957s
00:29:14.393 sys 0m2.601s
00:29:14.393 01:03:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:14.393 01:03:06 -- common/autotest_common.sh@10 -- # set +x
00:29:14.393 ************************************
00:29:14.393 END TEST keyring_file
00:29:14.393 ************************************
00:29:14.393 01:03:06 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:29:14.393 01:03:06 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:29:14.393 01:03:06 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:29:14.393 01:03:06 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:29:14.393 01:03:06 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:29:14.393 01:03:06 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:29:14.393 01:03:06 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:29:14.393 01:03:06 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:29:14.393 01:03:06 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:14.393 01:03:06 -- common/autotest_common.sh@10 -- # set +x 00:29:14.393 01:03:06 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:29:14.393 01:03:06 -- 
common/autotest_common.sh@1378 -- # local autotest_es=0 00:29:14.393 01:03:06 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:29:14.393 01:03:06 -- common/autotest_common.sh@10 -- # set +x 00:29:19.664 INFO: APP EXITING 00:29:19.664 INFO: killing all VMs 00:29:19.664 INFO: killing vhost app 00:29:19.664 INFO: EXIT DONE 00:29:22.196 0000:c9:00.0 (8086 0a54): Already using the nvme driver 00:29:22.196 0000:74:02.0 (8086 0cfe): Already using the idxd driver 00:29:22.196 0000:f1:02.0 (8086 0cfe): Already using the idxd driver 00:29:22.196 0000:cb:00.0 (8086 0a54): Already using the nvme driver 00:29:22.196 0000:79:02.0 (8086 0cfe): Already using the idxd driver 00:29:22.196 0000:6f:01.0 (8086 0b25): Already using the idxd driver 00:29:22.196 0000:6f:02.0 (8086 0cfe): Already using the idxd driver 00:29:22.196 0000:f6:01.0 (8086 0b25): Already using the idxd driver 00:29:22.196 0000:f6:02.0 (8086 0cfe): Already using the idxd driver 00:29:22.196 0000:74:01.0 (8086 0b25): Already using the idxd driver 00:29:22.196 0000:6a:02.0 (8086 0cfe): Already using the idxd driver 00:29:22.196 0000:79:01.0 (8086 0b25): Already using the idxd driver 00:29:22.196 0000:ec:01.0 (8086 0b25): Already using the idxd driver 00:29:22.454 0000:6a:01.0 (8086 0b25): Already using the idxd driver 00:29:22.454 0000:ca:00.0 (8086 0a54): Already using the nvme driver 00:29:22.454 0000:ec:02.0 (8086 0cfe): Already using the idxd driver 00:29:22.454 0000:e7:01.0 (8086 0b25): Already using the idxd driver 00:29:22.454 0000:e7:02.0 (8086 0cfe): Already using the idxd driver 00:29:22.454 0000:f1:01.0 (8086 0b25): Already using the idxd driver 00:29:25.743 Cleaning 00:29:25.743 Removing: /var/run/dpdk/spdk0/config 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:29:25.743 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:29:25.744 Removing: /var/run/dpdk/spdk0/hugepage_info 00:29:25.744 Removing: /var/run/dpdk/spdk1/config 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:29:25.744 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:29:25.744 Removing: /var/run/dpdk/spdk1/hugepage_info 00:29:25.744 Removing: /var/run/dpdk/spdk2/config 00:29:25.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:29:25.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:29:25.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:29:25.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:29:25.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:29:25.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:29:25.744 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:29:25.744 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:29:25.744 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:29:25.744 Removing: /var/run/dpdk/spdk2/hugepage_info 00:29:25.744 Removing: /var/run/dpdk/spdk3/config 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:29:25.744 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:29:25.744 Removing: /var/run/dpdk/spdk3/hugepage_info 00:29:25.744 Removing: /var/run/dpdk/spdk4/config 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:29:25.744 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:29:25.744 Removing: /var/run/dpdk/spdk4/hugepage_info 00:29:25.744 Removing: /dev/shm/nvmf_trace.0 00:29:25.744 Removing: /dev/shm/spdk_tgt_trace.pid2542064 00:29:25.744 Removing: /var/run/dpdk/spdk0 00:29:25.744 Removing: /var/run/dpdk/spdk1 00:29:25.744 Removing: /var/run/dpdk/spdk2 00:29:25.744 Removing: /var/run/dpdk/spdk3 00:29:25.744 Removing: /var/run/dpdk/spdk4 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2535720 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2538651 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2542064 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2543002 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2544242 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2544700 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2545959 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2545977 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2546603 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2550488 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2553613 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2553996 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2554774 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2555449 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2555836 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2556161 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2556490 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2556841 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2557506 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2561026 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2561374 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2561706 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2562002 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2562669 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2562951 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2563801 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2563900 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2564243 00:29:25.744 Removing: /var/run/dpdk/spdk_pid2564535 00:29:25.744 Removing: 
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2564873
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2564907
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2565867
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2566180
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2566582
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2569339
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2570920
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2572782
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2574843
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2576675
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2578780
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2580599
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2582643
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2584523
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2586440
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2588457
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2590293
00:29:25.744 Removing: /var/run/dpdk/spdk_pid2592882
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2594729
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2596693
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2598652
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2600468
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2602567
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2604393
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2606490
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2608321
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2610175
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2612238
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2614687
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2617547
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2621968
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2674427
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2679528
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2689897
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2696739
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2701259
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2702148
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2713432
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2713756
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2718614
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2725258
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2728305
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2740133
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2750939
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2753467
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2754638
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2774452
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2778969
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2783868
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2785909
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2788148
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2788333
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2788620
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2788927
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2789802
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2791960
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2793223
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2793862
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2796570
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2797350
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2798145
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2803538
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2810144
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2815145
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2824065
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2824068
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2830514
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2830714
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2831024
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2831600
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2831614
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2837025
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2837761
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2842993
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2846282
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2852610
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2858756
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2867654
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2867677
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2888667
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2891156
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2893552
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2895942
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2900159
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2900944
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2901676
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2902561
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2903991
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2904754
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2905569
00:29:26.005 Removing: /var/run/dpdk/spdk_pid2906268
00:29:26.006 Removing: /var/run/dpdk/spdk_pid2907751
00:29:26.006 Removing: /var/run/dpdk/spdk_pid2917538
00:29:26.006 Removing: /var/run/dpdk/spdk_pid2917747
00:29:26.006 Removing: /var/run/dpdk/spdk_pid2923835
00:29:26.006 Removing: /var/run/dpdk/spdk_pid2926356
00:29:26.006 Removing: /var/run/dpdk/spdk_pid2928920
00:29:26.006 Removing: /var/run/dpdk/spdk_pid2930550
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2933089
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2934742
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2945646
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2946240
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2946837
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2950237
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2950850
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2951465
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2956310
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2956329
00:29:26.265 Removing: /var/run/dpdk/spdk_pid2958099
00:29:26.265 Clean 01:03:18 -- common/autotest_common.sh@1437 -- # return 0
00:29:26.265 01:03:18 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup
00:29:26.265 01:03:18 -- common/autotest_common.sh@716 -- # xtrace_disable
00:29:26.265 01:03:18 -- common/autotest_common.sh@10 -- # set +x
00:29:26.265 01:03:18 -- spdk/autotest.sh@384 -- # timing_exit autotest
00:29:26.265 01:03:18 -- common/autotest_common.sh@716 -- # xtrace_disable
00:29:26.265 01:03:18 -- common/autotest_common.sh@10 -- # set +x
00:29:26.265 01:03:18 -- spdk/autotest.sh@385 -- # chmod a+r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt
00:29:26.265 01:03:18 -- spdk/autotest.sh@387 -- # [[ -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log ]]
00:29:26.265 01:03:18 -- spdk/autotest.sh@387 -- # rm -f /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/udev.log
00:29:26.265 01:03:18 -- spdk/autotest.sh@389 -- # hash lcov
00:29:26.265 01:03:18 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:29:26.265 01:03:18 -- spdk/autotest.sh@391 -- # hostname
00:29:26.265 01:03:18 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/dsa-phy-autotest/spdk -t spdk-fcp-10 -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info
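The "Cleaning" pass above is SPDK's post-test teardown: every DPDK primary process leaves runtime state under /var/run/dpdk (a config file, fbarray_memseg-* segment lists, fbarray_memzone and hugepage_info per spdkN directory) plus spdk_pid<NNN> lock files, and entries from killed targets accumulate until something sweeps them. A minimal sketch of such a sweep, assuming no SPDK/DPDK process is still running; cleanup_dpdk_runtime is a hypothetical helper name, not a function from the SPDK scripts:

# Hypothetical helper (not from the SPDK repo): sweep stale DPDK runtime
# state the way the "Cleaning"/"Removing:" output above reports it.
cleanup_dpdk_runtime() {
    local f
    for f in /var/run/dpdk/spdk*/* /var/run/dpdk/spdk_pid* /dev/shm/*trace*; do
        [[ -e $f ]] || continue    # skip patterns that matched nothing
        echo "Removing: $f"
        rm -rf "$f"
    done
    # drop the now-empty per-process directories as well
    rmdir /var/run/dpdk/spdk[0-9]* 2> /dev/null || true
}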
00:29:26.524 geninfo: WARNING: invalid characters removed from testname!
00:29:44.673 01:03:35 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:29:45.243 01:03:37 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:29:47.156 01:03:39 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:29:48.099 01:03:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:29:49.483 01:03:41 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:29:50.885 01:03:43 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/cov_total.info
00:29:51.827 01:03:44 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
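The coverage steps above follow a capture/merge/filter shape: the baseline capture (cov_base.info) and the test-run capture (cov_test.info) are combined with lcov -a, paths that should not count toward SPDK coverage (the bundled dpdk/ tree, system code under /usr, example and tool sources) are stripped with lcov -r, and the intermediate tracefiles are deleted. A condensed sketch of the same flow; OUT is a stand-in variable and the long --rc options from the real invocations are omitted for brevity:

# Sketch of the merge-and-filter flow logged above (not the autotest.sh text
# itself); OUT is a placeholder and the --rc coverage switches are elided.
OUT=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output
lcov -q -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"
# Drop code that should not count toward SPDK coverage.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov -q -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"
done
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"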
00:29:52.088 01:03:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/common.sh
00:29:52.088 01:03:44 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:29:52.088 01:03:44 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:29:52.088 01:03:44 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:29:52.088 01:03:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:52.088 01:03:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:52.088 01:03:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:52.088 01:03:44 -- paths/export.sh@5 -- $ export PATH
00:29:52.088 01:03:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:29:52.088 01:03:44 -- common/autobuild_common.sh@434 -- $ out=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output
00:29:52.088 01:03:44 -- common/autobuild_common.sh@435 -- $ date +%s
00:29:52.088 01:03:44 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714172624.XXXXXX
00:29:52.088 01:03:44 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714172624.m3RNrY
00:29:52.088 01:03:44 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]]
00:29:52.088 01:03:44 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']'
00:29:52.088 01:03:44 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/'
00:29:52.088 01:03:44 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp'
00:29:52.088 01:03:44 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/dsa-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:29:52.088 01:03:44 -- common/autobuild_common.sh@451 -- $ get_config_params
00:29:52.088 01:03:44 -- common/autotest_common.sh@385 -- $ xtrace_disable
00:29:52.088 01:03:44 -- common/autotest_common.sh@10 -- $ set +x
00:29:52.088 01:03:44 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk'
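Note the duplicated entries in the PATH values above: paths/export.sh prepends each tool directory unconditionally every time it is sourced, so /opt/golangci, /opt/go and /opt/protoc each appear twice. The duplication is harmless, but a prepend that skips directories already on PATH would keep it tidy; a small sketch follows (pathmunge is a hypothetical name, not part of export.sh):

# Hypothetical duplicate-free prepend; the export.sh in this log prepends
# unconditionally, which is what produces the repeated PATH components.
pathmunge() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already present: leave PATH unchanged
        *) PATH="$1:$PATH" ;;
    esac
}
pathmunge /opt/golangci/1.54.2/bin
pathmunge /opt/go/1.21.1/bin
pathmunge /opt/protoc/21.7/bin
export PATH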
00:29:52.088 01:03:44 -- common/autobuild_common.sh@453 -- $ start_monitor_resources
00:29:52.088 01:03:44 -- pm/common@17 -- $ local monitor
00:29:52.088 01:03:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:52.089 01:03:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2969573
00:29:52.089 01:03:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:52.089 01:03:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2969575
00:29:52.089 01:03:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:52.089 01:03:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2969576
00:29:52.089 01:03:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:52.089 01:03:44 -- pm/common@21 -- $ date +%s
00:29:52.089 01:03:44 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=2969578
00:29:52.089 01:03:44 -- pm/common@26 -- $ sleep 1
00:29:52.089 01:03:44 -- pm/common@21 -- $ date +%s
00:29:52.089 01:03:44 -- pm/common@21 -- $ date +%s
00:29:52.089 01:03:44 -- pm/common@21 -- $ date +%s
00:29:52.089 01:03:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714172624
00:29:52.089 01:03:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714172624
00:29:52.089 01:03:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714172624
00:29:52.089 01:03:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/dsa-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1714172624
00:29:52.089 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714172624_collect-bmc-pm.bmc.pm.log
00:29:52.089 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714172624_collect-cpu-temp.pm.log
00:29:52.089 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714172624_collect-vmstat.pm.log
00:29:52.089 Redirecting to /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1714172624_collect-cpu-load.pm.log
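Each collector above (collect-cpu-load, collect-vmstat, collect-cpu-temp, collect-bmc-pm) is launched in the background with its output redirected to a .pm.log file under the power/ directory, and its PID is tracked so that an EXIT trap can signal it later, as the kill -TERM lines below show. The start/stop shape, reduced to a sketch assuming one script per monitor and a writable $power_dir; the function names are illustrative, not the pm/common code itself:

# Illustrative reduction of the monitor lifecycle in this log; the function
# names are invented, but the PID-file-plus-SIGTERM pattern matches it.
power_dir=/var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power
start_monitor() {                       # start one collector in the background
    local name=$1; shift
    sudo -E "$@" > "$power_dir/$name.pm.log" 2>&1 &
    echo $! | sudo tee "$power_dir/$name.pid" > /dev/null
}
stop_monitors() {                       # invoked from an EXIT trap
    local pidfile
    for pidfile in "$power_dir"/*.pid; do
        [[ -e $pidfile ]] || continue
        sudo kill -TERM "$(< "$pidfile")" 2> /dev/null || true
    done
}
trap stop_monitors EXIT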
00:29:53.032 01:03:45 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT
00:29:53.032 01:03:45 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j128
00:29:53.032 01:03:45 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/dsa-phy-autotest/spdk
00:29:53.032 01:03:45 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:29:53.032 01:03:45 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:29:53.032 01:03:45 -- spdk/autopackage.sh@19 -- $ timing_finish
00:29:53.032 01:03:45 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:29:53.032 01:03:45 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:29:53.032 01:03:45 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/timing.txt
00:29:53.032 01:03:45 -- spdk/autopackage.sh@20 -- $ exit 0
00:29:53.032 01:03:45 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:29:53.032 01:03:45 -- pm/common@30 -- $ signal_monitor_resources TERM
00:29:53.032 01:03:45 -- pm/common@41 -- $ local monitor pid pids signal=TERM
00:29:53.032 01:03:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:53.033 01:03:45 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:29:53.033 01:03:45 -- pm/common@45 -- $ pid=2969590
00:29:53.033 01:03:45 -- pm/common@52 -- $ sudo kill -TERM 2969590
00:29:53.033 01:03:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:53.033 01:03:45 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:29:53.033 01:03:45 -- pm/common@45 -- $ pid=2969591
00:29:53.033 01:03:45 -- pm/common@52 -- $ sudo kill -TERM 2969591
00:29:53.033 01:03:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:53.033 01:03:45 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:29:53.033 01:03:45 -- pm/common@45 -- $ pid=2969592
00:29:53.033 01:03:45 -- pm/common@52 -- $ sudo kill -TERM 2969592
00:29:53.033 01:03:45 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:29:53.033 01:03:45 -- pm/common@44 -- $ [[ -e /var/jenkins/workspace/dsa-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:29:53.033 01:03:45 -- pm/common@45 -- $ pid=2969593
00:29:53.033 01:03:45 -- pm/common@52 -- $ sudo kill -TERM 2969593
00:29:53.293 + [[ -n 2422799 ]]
00:29:53.294 + sudo kill 2422799
00:29:53.306 [Pipeline] }
00:29:53.324 [Pipeline] // stage
00:29:53.330 [Pipeline] }
00:29:53.348 [Pipeline] // timeout
00:29:53.354 [Pipeline] }
00:29:53.373 [Pipeline] // catchError
00:29:53.378 [Pipeline] }
00:29:53.395 [Pipeline] // wrap
00:29:53.400 [Pipeline] }
00:29:53.417 [Pipeline] // catchError
00:29:53.426 [Pipeline] stage
00:29:53.429 [Pipeline] { (Epilogue)
00:29:53.444 [Pipeline] catchError
00:29:53.446 [Pipeline] {
00:29:53.462 [Pipeline] echo
00:29:53.464 Cleanup processes
00:29:53.470 [Pipeline] sh
00:29:53.761 + sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:29:53.762 2970110 sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:29:53.777 [Pipeline] sh
00:29:54.067 ++ sudo pgrep -af /var/jenkins/workspace/dsa-phy-autotest/spdk
00:29:54.067 ++ grep -v 'sudo pgrep'
00:29:54.067 ++ awk '{print $1}'
00:29:54.067 + sudo kill -9
00:29:54.067 + true
00:29:54.078 [Pipeline] sh
00:29:54.360 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:30:04.397 [Pipeline] sh
00:30:04.683 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:30:04.683 Artifacts sizes are good
00:30:04.697 [Pipeline] archiveArtifacts
00:30:04.704 Archiving artifacts
00:30:04.880 [Pipeline] sh
00:30:05.169 + sudo chown -R sys_sgci /var/jenkins/workspace/dsa-phy-autotest
00:30:05.183 [Pipeline] cleanWs
00:30:05.193 [WS-CLEANUP] Deleting project workspace...
00:30:05.193 [WS-CLEANUP] Deferred wipeout is used...
00:30:05.199 [WS-CLEANUP] done
00:30:05.201 [Pipeline] }
00:30:05.225 [Pipeline] // catchError
00:30:05.238 [Pipeline] sh
00:30:05.525 + logger -p user.info -t JENKINS-CI
00:30:05.533 [Pipeline] }
00:30:05.548 [Pipeline] // stage
00:30:05.553 [Pipeline] }
00:30:05.572 [Pipeline] // node
00:30:05.578 [Pipeline] End of Pipeline
00:30:05.612 Finished: SUCCESS